Thursday, May 3, 2018

Infant–Mother Acoustic–Prosodic Alignment and Developmental Risk

Purpose
One promising early marker for autism and other communicative and language disorders is early infant speech production. Here we used daylong recordings of high- and low-risk infant–mother dyads to examine whether acoustic–prosodic alignment and two automated measures of infant vocalization are related to developmental risk status (indexed via familial risk) and to developmental progress at 36 months of age.
Method
Automated analyses of the acoustics of daylong, real-world interactions were used to examine whether the pitch characteristics of one vocalization by the mother or the child predicted those of the response vocalization by the other speaker, and whether other features of infants' speech in the daylong recordings were associated with developmental risk status or outcomes.
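To make this kind of alignment analysis concrete, here is a minimal sketch of how pitch coupling between adjacent vocalizations could be tested; the file name, column names, and model terms are hypothetical and do not represent the authors' actual pipeline.

import pandas as pd
import statsmodels.formula.api as smf

# One row per adjacent vocalization pair (hypothetical columns):
#   prev_f0   = mean F0 of the preceding vocalization (semitones)
#   resp_f0   = mean F0 of the response by the other speaker
#   responder = "mother" or "infant"; risk_group = "low" or "high"
pairs = pd.read_csv("vocalization_pairs.csv")  # hypothetical file

# A positive, significant prev_f0 slope would indicate acoustic-prosodic
# alignment; the interaction asks whether alignment differs by risk group.
model = smf.ols("resp_f0 ~ prev_f0 * risk_group + responder", data=pairs).fit()
print(model.summary())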
Results
Low-risk and high-risk dyads did not differ in the level of acoustic–prosodic alignment, which was not significant overall. Further analyses revealed that acoustic–prosodic alignment did not predict infants' later developmental progress; later progress was, however, associated with two automated measures of infant vocalizations (daily vocalizations and conversational turns).
Conclusions
Although further research is needed, these findings suggest that automated measures of vocalizations drawn from daylong recordings are a possible early identification tool for later developmental progress/concerns.
Supplemental Material
https://osf.io/cdn3v/

from #Audiology via ola Kala on Inoreader https://ift.tt/2FGqTSu
via IFTTT

Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

Purpose
The purpose of this study is to introduce a chaos level test for evaluating the performance of linear and nonlinear voice type classification methods under varying signal chaos conditions, without relying on subjective impression.
Study Design
Voice signals were constructed with differing degrees of noise to model signal chaos. At each noise power, 100 Monte Carlo experiments were run to analyze the outputs of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos.
Method
A diffusive behavior detection–based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions.
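A minimal sketch of this kind of Monte Carlo design is shown below, assuming a synthetic harmonic "voice" signal and a simple stand-in periodicity measure in place of jitter, shimmer, correlation dimension, and spectrum convergence ratio; it illustrates the structure of the test, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
fs, dur, f0 = 16000, 1.0, 120.0
t = np.arange(int(fs * dur)) / fs
# Synthetic sustained-vowel-like signal: a short harmonic series at 120 Hz.
clean = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))

def add_noise(signal, snr_db, rng):
    """Add white noise scaled to the requested signal-to-noise ratio (dB)."""
    noise = rng.standard_normal(signal.size)
    scale = np.sqrt(np.mean(signal**2) / (10 ** (snr_db / 10) * np.mean(noise**2)))
    return signal + scale * noise

def periodicity(signal, fs, f0):
    """Stand-in measure: autocorrelation at one pitch period (near 1 for a periodic signal)."""
    lag = int(round(fs / f0))
    x = signal - signal.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

snrs_db = [30, 20, 10, 0, -10]
n_trials = 100  # Monte Carlo repetitions per noise level
for snr in snrs_db:
    vals = [periodicity(add_noise(clean, snr, rng), fs, f0) for _ in range(n_trials)]
    print(f"SNR {snr:>4} dB: periodicity = {np.mean(vals):.3f} ± {np.std(vals):.3f}")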
Results
Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study.
Conclusion
The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the proposed test. The presented chaos level test could be broadly used to evaluate acoustic analysis methods and to establish the most appropriate methodology for objective voice analysis in clinical practice.

from #Audiology via ola Kala on Inoreader https://ift.tt/2js0PC4
via IFTTT

Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory

Purpose
We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations.
Method
Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford–Kowal–Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM.
Results
Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly on a distinct 3rd factor, with minimal secondary loadings from sentence recall and short-term memory. Nonverbal IQ loaded on a 4th factor.
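As an illustration of the two analysis steps described here, a minimal sketch follows; the data file and measure names are hypothetical, and the sketch is not the study's analysis code.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("child_measures.csv")  # hypothetical file
measures = ["bkb_sin", "attention", "sentence_recall", "vocabulary", "nonverbal_iq"]

def residualize(y, covariate):
    """Remove the linear effect of a covariate (here, age) from a score."""
    slope, intercept = np.polyfit(covariate, y, 1)
    return y - (slope * covariate + intercept)

# Correlations among age-residualized scores = partial correlations controlling for age.
resid = df[measures].apply(lambda col: residualize(col.values, df["age"].values))
print(resid.corr())

# Principal component analysis: the loadings show which measures cluster on which factor.
pcs = PCA(n_components=4).fit(StandardScaler().fit_transform(resid))
print(pcs.components_)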
Conclusions
Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to replicate the proposed role played by WM in adverse listening situations.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FFhdru
via IFTTT

Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development

Purpose
The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD).
Method
Fifteen children with SLI (M age = 6;5 [years;months]) and 15 with TD (M age = 6;4) completed a forward gating task that presented consonant–vowel–consonant dense and sparse (neighborhood density) nouns and verbs (syntactic class).
Results
On all dependent variables, the SLI group performed like the TD group. Recognition performance was highest for dense words and nouns. The majority of 1st nontarget responses shared the 1st phoneme with the target (i.e., were in the target's cohort). When the word types were ranked from easiest to most difficult, children showed equivalent recognition performance for dense verbs and sparse nouns, which were both easier to recognize than sparse verbs but more difficult than dense nouns.
Conclusion
The current study yields new insight into how children access lexical–phonological information and syntactic class during the process of spoken word recognition. Given the identical pattern of results for the SLI and TD groups, we hypothesize that accessing lexical–phonological information may be a strength for children with SLI. We also discuss implications for using the forward gating paradigm as a measure of word recognition.

from #Audiology via ola Kala on Inoreader https://ift.tt/2KAlShR
via IFTTT

Prosodic Boundary Effects on Syntactic Disambiguation in Children With Cochlear Implants

Purpose
This study investigated prosodic boundary effects on the comprehension of attachment ambiguities in children with cochlear implants (CIs) and normal hearing (NH) and tested the absolute boundary hypothesis and the relative boundary hypothesis. Processing speed was also investigated.
Method
Fifteen children with NH and 13 children with CIs (ages 8–12 years) who are monolingual speakers of Brazilian Portuguese participated in a computerized comprehension task with sentences containing prepositional phrase attachment ambiguity and manipulations of prosodic boundaries.
Results
Children with NH and children with CIs differed in how they used prosodic forms to disambiguate sentences. Children in both groups provided responses consistent with half of the predictions of the relative boundary hypothesis. The absolute boundary hypothesis did not characterize the syntactic disambiguation of children with CIs. Processing speed was similar in both groups.
Conclusions
Children with CIs do not use prosodic information to disambiguate sentences or to facilitate comprehension of unambiguous sentences in the same way as children with NH. The results suggest that cross-linguistic differences may interact with syntactic disambiguation. Prosodic contrasts that affect sentence comprehension need to be addressed directly in intervention with children with CIs.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FFguGM
via IFTTT

Does Implicit Voice Learning Improve Spoken Language Processing? Implications for Clinical Practice

Purpose
In typical interactions with other speakers, including in a clinical environment, listeners become familiar with voices through implicit learning. Previous studies have found evidence for a Familiar Talker Advantage (better speech perception and spoken language processing for familiar voices) following explicit voice learning. The current study examined whether a Familiar Talker Advantage would result from implicit voice learning.
Method
Thirty-three adults and 16 second graders were familiarized with 1 of 2 talkers' voices over 2 days through live interactions as 1 of 2 experimenters administered standardized tests and interacted with the listeners. To assess whether this implicit voice learning would generate a Familiar Talker Advantage, listeners completed a baseline sentence recognition task and a post-learning sentence recognition task with both the familiar talker and the unfamiliar talker.
Results
No significant effect of voice familiarity was found for either the children or the adults following implicit voice learning. Effect size estimates suggest that familiarity with the voice may benefit some listeners, despite the lack of an overall effect of familiarity.
Discussion
We discuss possible clinical implications of this finding and directions for future research.

from #Audiology via ola Kala on Inoreader https://ift.tt/2jpONJ6
via IFTTT

Nonword Repetition and Language Outcomes in Young Children Born Preterm

Purpose
The aims of this study were to examine phonological short-term memory in children born preterm (PT) and to explore relations between this neuropsychological process and later language skills.
Method
Children born PT (n = 74) and full term (FT; n = 60) participated in a nonword repetition (NWR) task at 36 months old. Standardized measures of language skills were administered at 36 and 54 months old. Group differences in NWR task completion and NWR scores were analyzed. Hierarchical multiple regression analyses examined the extent to which NWR ability predicted later performance on language measures.
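A minimal sketch of such a hierarchical regression appears below; the file, variable names, and covariate are hypothetical and stand in for the study's actual predictors.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("preterm_language.csv")  # hypothetical file

# Step 1: covariates only; Step 2: add NWR. The change in R^2 indexes the
# unique variance in 54-month language scores accounted for by NWR at 36 months.
base = smf.ols("lang_54mo ~ birth_group + maternal_education", data=df).fit()
full = smf.ols("lang_54mo ~ birth_group + maternal_education + nwr_36mo", data=df).fit()
print(f"Delta R^2 for NWR = {full.rsquared - base.rsquared:.3f}")

# Moderation check: does birth group change the NWR-language relation?
mod = smf.ols("lang_54mo ~ nwr_36mo * birth_group + maternal_education", data=df).fit()
print(mod.params.filter(like=":"))  # interaction term(s)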
Results
A larger proportion of children born PT than children born FT did not complete the NWR task. Among children who completed the task, the performance of children born PT and FT was not statistically different. NWR scores at 36 months old accounted for significant unique variance in language scores at 54 months old in both groups. Birth group did not moderate the relation between NWR and later language performance.
Conclusions
These findings suggest that phonological short-term memory is an important skill underlying language development in both children born PT and FT. These findings have relevance to clinical practice in assessing children born PT.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FFh6w4
via IFTTT

A Multilinguistic Approach to Evaluating Student Spelling in Writing Samples

Purpose
Spelling is a critical component of literacy and language arts that can negatively influence other aspects of written composition. This clinical focus article describes a spelling error classification system that can be used to identify underlying linguistic deficits that contribute to students' spelling errors. The system is designed to take advantage of the linguistic expertise of speech-language pathologists to efficiently assess student errors in written compositions that are generated as a component of everyday classroom instruction.
Method
A review of the literature was conducted regarding spelling as a component of literacy and language arts, the development of spelling, and the linguistic contributions to spelling. Existing criterion-referenced measures of the spelling of simple and morphologically complex words were then reviewed, and a new manual technique for analyzing spelling in student written compositions was created.
Conclusions
The language expertise of speech-language pathologists enables them to readily evaluate the phonological, orthographic, and morphological errors in student misspellings, in order to identify specific underlying linguistic deficits and plan targeted interventions. The error classification system provides speech-language pathologists with a tool that is both simple and time efficient and, thus, may help increase their confidence and ability in addressing the spelling needs of students.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2wgzITo
via IFTTT

Adolescent Summaries of Narrative and Expository Discourse: Differences and Predictors

Purpose
Summarizing expository passages is a critical academic skill that is understudied in language research. The purpose of this study was to compare the quality of verbal summaries produced by adolescents for 3 different discourse types and to determine whether a composite measure of cognitive skill or a test of expressive syntax predicted their performance.
Method
Fifty adolescents listened to, and then verbally summarized, 1 narrative and 2 expository lectures (compare–contrast and cause–effect). They also participated in testing that targeted expressive syntax and 5 cognitive subdomains.
Results
Summary quality scores were significantly different across discourse types, with a medium effect size. Analyses revealed significantly higher summary quality scores for cause–effect than compare–contrast summaries. Although the composite cognitive measure contributed significantly to the prediction of quality scores for both types of expository summaries, the expressive syntax score only contributed significantly to the quality scores for narrative summaries.
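For illustration, a minimal sketch of a per-discourse-type regression of this kind is shown below; the file and column names are hypothetical and do not reproduce the study's analyses.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("summary_scores.csv")  # hypothetical long-format file
# expected columns: student_id, discourse ('narrative', 'compare_contrast',
# 'cause_effect'), quality, cognitive_composite, expressive_syntax

# Fit one model per discourse type and compare which predictor carries weight.
for kind, sub in df.groupby("discourse"):
    fit = smf.ols("quality ~ cognitive_composite + expressive_syntax", data=sub).fit()
    print(kind, fit.pvalues[["cognitive_composite", "expressive_syntax"]].round(3).to_dict())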
Conclusions
These results support previous research indicating that type of expository discourse may impact student performance. These results also show, for the first time, that cognition may play a predictive role in determining summary quality for expository but not narrative passages in this population. In addition, despite the more complex syntax commonly associated with exposition versus narratives, an expressive syntax score was only predictive of performance on narrative summaries. These findings provide new information, questions, and directions for future research for those who study academic discourse and for professionals who must identify and manage the problems of students struggling with different types of academic discourse.
Supplemental Material
https://doi.org/10.23641/asha.6167879

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2rhJMXp
via IFTTT

Immediate Passage Comprehension and Encoding of Information Into Long-Term Memory in Children With Normal Hearing: The Effect of Voice Quality and Multitalker Babble Noise

Purpose
This study examines how voice quality and multitalker babble noise affect immediate passage comprehension and the efficiency of information encoding into long-term memory in children with normal hearing.
Method
Eighteen children (mean age = 9 years) with normal hearing participated. Immediate passage comprehension performance and delayed performance (after 5 to 8 days) were assessed for 4 listening conditions: a typical voice in quiet, a typical voice in noise, a dysphonic voice in quiet, and a dysphonic voice in noise.
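For illustration, a minimal sketch of a repeated-measures analysis for this 2 (voice quality) x 2 (noise) within-subject design is given below; the file and column names are hypothetical, and the study's own statistical approach may differ.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data, one comprehension score per child per condition
# (AnovaRM expects exactly one observation per cell per subject).
# expected columns: child_id, voice ('typical'/'dysphonic'),
#                   noise ('quiet'/'babble'), score
df = pd.read_csv("comprehension_scores.csv")  # hypothetical file

res = AnovaRM(df, depvar="score", subject="child_id",
              within=["voice", "noise"]).fit()
print(res)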
Results
Multitalker babble noise had a significant effect on immediate and delayed performance. This effect was more pronounced for delayed performance. No significant main effect of voice quality was seen on immediate or delayed performance.
Conclusions
Multitalker babble noise impairs immediate passage comprehension and encoding of information into long-term memory for later recall in children with normal hearing. In learning situations where competing speech signals are present, background noise may reduce the prerequisites for optimal learning.

from #Audiology via ola Kala on Inoreader https://ift.tt/2jpaZmC
via IFTTT

The Effect of Presentation Level on the SCAN-3 in Children and Adults

Purpose
The pediatric and adult versions of the SCAN-3 test (Keith, 2009a, 2009b) are widely used to screen for and diagnose auditory processing disorders. According to the instruction manual, test administration is flexible: the test may be administered through an audiometer at 50 dB HL or through a portable CD player at the patient's or administrator's most comfortable listening level (MCL). Because MCL may vary across individuals, even those with normal hearing sensitivity, this study explored whether presentation level affected scores on the SCAN-3 for both pediatric and adult populations.
Method
Twenty-two young adults and 23 children with normal hearing sensitivity and middle ear function were administered the SCAN-3 three different times at 1-month intervals, at 40, 50, and 60 dB HL. The stimulus level of the SCAN-3 was counterbalanced across participants to eliminate test order effects. In addition, MCL was measured in the pediatric participants during each session.
Results
MCL varied significantly across children as well as between test sessions, ranging from 40 to 75 dB HL. Performance on 3 of the 4 subtests administered, as well as composite scores, was significantly different across presentation levels (based on scaled scores). Effect sizes were also calculated and found to be large. The number of composite scores interpreted as within normal limits versus borderline or disordered also differed significantly across presentation levels.
Conclusions
Presentation level appears to affect performance on the auditory figure-ground, monaural low-redundancy, and binaural integration types of auditory processing tasks measured by the SCAN-3. In children, MCL was found to vary significantly both between and within individuals. Although several professions outside audiology are qualified to administer the SCAN-3, it is likely that many of these individuals administer the test without an audiometer and would use an MCL to determine presentation level. It is recommended that SCAN-3 users administer the test through an audiometer at 50 dB HL, rather than with a portable CD player at MCL, to avoid presentation level effects.

from #Audiology via ola Kala on Inoreader https://ift.tt/2HOkab2
via IFTTT

HLAA Kicks Off the 2018 Walk4Hearing

The Hearing Loss Association of America (HLAA) announced the launch of the 2018 Walk4Hearing program, with events scheduled in 18 cities across the country this spring and fall to raise awareness of hearing loss and the importance of good hearing health.

Founded in 1979, HLAA promotes the rights of people with hearing loss through information, education, support, and advocacy organized via its extensive network of chapters and state organizations throughout the United States.

Walk4Hearing was introduced in 2006 and has since raised over $13 million and welcomed more than 90,000 walkers, making it the nation's largest program of its kind. Funds raised support both local and national programs and services for people with hearing loss, including installation of hearing assistive technology in public places, provision of captioning at HLAA chapter meetings, and advocacy efforts at the federal and state levels.

This year's theme emphasizes the importance of getting your hearing screened (#screenURhearing). Untreated hearing loss affects overall health, increasing the risk of falls, isolation, anxiety, depression, and cognitive decline.

“Whether you have a hearing loss, are a friend or family member of someone with hearing loss, or you just want some help hearing in a noisy world, the Walk4Hearing offers something for you,” said HLAA executive director Barbara Kelley.

Registration to join a walk is free at walk4hearing.org.
Published: 4/30/2018 7:17:00 PM


from #Audiology via xlomafota13 on Inoreader https://ift.tt/2JSWZNr
via IFTTT

CEP250 mutations associated with mild cone-rod dystrophy and sensorineural hearing loss in a Japanese family.

Ophthalmic Genet. 2018 May 02;:1-8

Authors: Kubota D, Gocho K, Kikuchi S, Akeo K, Miura M, Yamaki K, Takahashi H, Kameya S

Abstract
BACKGROUND: CEP250 encodes the C-Nap1 protein, which belongs to the CEP family of proteins. C-Nap1 has been reported to be expressed in the photoreceptor cilia and to interact with other ciliary proteins. Mutations of CEP250 cause atypical Usher syndrome, which is characterized by early-onset sensorineural hearing loss (SNHL) and relatively mild retinitis pigmentosa. This study tested the hypothesis that the mild cone-rod dystrophy (CRD) and SNHL in a non-consanguineous Japanese family were caused by CEP250 mutations.
METHODS: Detailed ophthalmic and auditory examinations were performed on the proband and her family members. Whole exome sequencing (WES) was used on the DNA obtained from the proband.
RESULTS: Electrophysiological analysis revealed a mild CRD in two family members. Adaptive optics (AO) imaging showed reduced cone density around the fovea. Auditory examinations showed a slight SNHL in both patients. WES of the proband identified compound heterozygous variants in CEP250: c.361C>T (p.R121*) and c.562C>T (p.R188*). The variants were found to co-segregate with the disease in five members of the family.
CONCLUSIONS: Both CEP250 variants are null variants and, according to the American College of Medical Genetics and Genomics (ACMG) standards and guidelines, meet the very strong evidence criterion (PVS1); both alleles are therefore expected to be classified as pathogenic. Our data indicate that mutations of CEP250 can cause mild CRD and SNHL in Japanese patients. Because the ophthalmological phenotypes were very mild, high-resolution retinal imaging, such as AO, will be helpful in diagnosing CEP250-associated disease.
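
As a simplified illustration of how the PVS1 criterion applies to predicted null variants such as the two stop-gain changes reported here (a minimal sketch, not a clinical classifier; the consequence categories and helper function are assumptions for illustration only):

# Illustrative only: a simplified check of the ACMG PVS1 (very strong) criterion
# for predicted null variants in a gene where loss of function causes disease.
NULL_CONSEQUENCES = {"nonsense", "frameshift", "canonical_splice_site", "start_loss"}

def pvs1_applies(consequence, loss_of_function_is_disease_mechanism):
    # PVS1 applies to a null variant when loss of function is an
    # established disease mechanism for the gene.
    return consequence in NULL_CONSEQUENCES and loss_of_function_is_disease_mechanism

# The two CEP250 variants reported in this family are stop-gain (nonsense) changes.
variants = [("c.361C>T (p.R121*)", "nonsense"), ("c.562C>T (p.R188*)", "nonsense")]
for name, consequence in variants:
    label = "PVS1" if pvs1_applies(consequence, True) else "PVS1 not applied"
    print(f"{name}: {label}")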

PMID: 29718797 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2Ku8SKC
via IFTTT

The clinical characteristics of patients with mitochondrial tRNA Leu(UUR)m.3243A > G mutation: Compared with type 1 diabetes and early onset type 2 diabetes.

J Diabetes Complications. 2017 Aug;31(8):1354-1359

Authors: Zhu J, Yang P, Liu X, Yan L, Rampersad S, Li F, Li H, Sheng C, Cheng X, Zhang M, Qu S

Abstract
OBJECTIVE: This study presents nine patients with the mitochondrial tRNA Leu(UUR) m.3243A>G mutation, which causes maternally inherited diabetes and deafness (MIDD), and compares their clinical characteristics and diabetes complications with those of patients with type 1 diabetes (T1DM) or early-onset type 2 diabetes (T2DM).
METHODS: The study covered 9 patients with MIDD, 33 patients with T1DM, and 86 patients (age of onset ≤35 years) with early-onset T2DM, matched for sex, age at onset of diabetes, and duration of diabetes. All patients with MIDD were confirmed to carry the m.3243A>G mitochondrial DNA mutation. Serum HbA1c, beta-cell function, retinal and renal complications of diabetes, bone metabolic markers, and lumbar spine and femoral neck bone mineral density (BMD) were compared to characterize the clinical features of all patients.
RESULTS: The nine patients came from five unrelated families, and their mean (SD) onset age was 31.2 ± 7.2 years. Two patients required insulin at presentation, and six patients progressed to insulin requirement after a mean of 7.2 years. β-Cell function in the MIDD group was intermediate between that of the T1DM and early-onset T2DM groups. In the MIDD group, four patients (4/9) were diagnosed with diabetic retinopathy and five patients (5/9) had macroalbuminuria; these numbers were comparable to those in the T1DM and early-onset T2DM groups. The rate of osteoporosis (BMD T-score < -2.5 SD) in patients with MIDD was higher than in the T1DM and early-onset T2DM groups.
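
For context, the osteoporosis threshold used above relies on the BMD T-score, the measured bone mineral density expressed in standard deviations from a young-adult reference mean. A minimal sketch of the arithmetic, using purely hypothetical reference values rather than any values from the study:

def bmd_t_score(measured_bmd, young_adult_mean, young_adult_sd):
    # T-score: how many reference SDs the measured BMD lies from the
    # young-adult reference mean.
    return (measured_bmd - young_adult_mean) / young_adult_sd

# Hypothetical lumbar-spine measurement and hypothetical reference values (g/cm2).
t = bmd_t_score(measured_bmd=0.72, young_adult_mean=1.00, young_adult_sd=0.11)
print(f"T-score = {t:.1f}; osteoporosis (T < -2.5): {t < -2.5}")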
CONCLUSION: Of the nine subjects with MIDD, three patients (1-II-1, 1-II-3, and 1-II-4) from the same family had a history of acute pancreatitis. Compared with T1DM or early-onset T2DM patients matched for sex, age, and duration of diabetes, MIDD patients had the highest rate of osteoporosis.

PMID: 28599824 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2rebmac
via IFTTT

Development of a multimedia educational programme for first-time hearing aid users: a participatory design.

Int J Audiol. 2018 May 02;:1-10

Authors: Ferguson M, Leighton P, Brandreth M, Wharrad H

Abstract
OBJECTIVE: To develop content for a series of interactive video tutorials (or reusable learning objects, RLOs) for first-time adult hearing aid users, to enhance knowledge of hearing aids and communication.
DESIGN: RLO content was based on an electronically delivered Delphi review, workshops, and iterative peer review and feedback, using a mixed-methods participatory approach.
STUDY SAMPLE: An expert panel of 33 hearing healthcare professionals and workshops involving 32 hearing aid users and 11 audiologists. This ensured that the social, emotional, and practical experiences of end-users were captured alongside clinical validity.
RESULTS: Evidence-based, self-contained RLOs grounded in pedagogical principles were developed for delivery via DVD for television, PC, or the internet. Content drew on Delphi review statements about essential information that reached consensus (≥90%), visual representations of concepts relating to hearing aids and communication, and iterative peer review and feedback.
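
As an illustration of how a consensus threshold of this kind can be applied (a minimal sketch with hypothetical statements and votes, not the study's Delphi data or software):

# Hypothetical panel votes (1 = endorse, 0 = do not endorse) for two statements;
# the actual Delphi statements and responses are not reproduced here.
ratings = {
    "Statement A (hypothetical)": [1] * 31 + [0] * 2,   # 33 panel members
    "Statement B (hypothetical)": [1] * 25 + [0] * 8,
}

CONSENSUS_THRESHOLD = 0.90
for statement, votes in ratings.items():
    agreement = sum(votes) / len(votes)
    status = "retained" if agreement >= CONSENSUS_THRESHOLD else "not retained"
    print(f"{statement}: {agreement:.0%} agreement -> {status}")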
CONCLUSIONS: This participatory approach recognises and involves key stakeholders in the design process to create content for a user-friendly multimedia educational intervention that supplements the clinical management of first-time hearing aid users. We propose that participatory methodologies be used in the development of content for e-learning interventions in hearing-related research and clinical practice.

PMID: 29718733 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2HKm68u
via IFTTT
