Saturday, 8 December 2018

Development of the Cochlear Implant Quality of Life Item Bank

Objectives: Functional outcomes following cochlear implantation have traditionally focused on word and sentence recognition, which, although important, do not capture the varied communication and other experiences of adult cochlear implant (CI) users. Although the inadequacy of speech recognition for quantifying CI user benefit is widely acknowledged, adult CI user outcomes have rarely been assessed comprehensively beyond these conventional measures. An important limitation in addressing this knowledge gap is that patient-reported outcome measures have not been developed and validated in adult CI patients using rigorous scientific methods. The purpose of the present study is to build on our previous work and create an item bank that can be used to develop new patient-reported outcome measures assessing CI quality of life (QOL) in the adult CI population. Design: An online questionnaire was made available to 500 adult CI users who represented the adult CI population and were recruited through a consortium of 20 CI centers in the United States. The questionnaire included the 101-question CIQOL item pool and additional questions related to demographics, hearing and CI history, and speech recognition scores. In accordance with the Patient-Reported Outcomes Measurement Information System, responses were psychometrically analyzed using confirmatory factor analysis and item response theory. Results: Of the 500 questionnaires sent, 371 (74.2%) were completed. Subjects represented the full range of ages, durations of CI use, speech recognition abilities, and listening modalities of the adult CI population, and subjects were implanted with devices from each of the three CI manufacturers. The initial item pool consisted of the following domain constructs: communication, emotional, entertainment, environment, independence, listening effort, and social.
Through psychometric analysis, after removing locally dependent and misfitting items, all of the domains were found to have sound psychometric properties, with the exception of the independence domain. This resulted in a final CIQOL item bank of 81 items in 6 domains with good psychometric properties. Conclusions: Our findings reveal that hypothesis-driven quantitative analyses result in a psychometrically sound CIQOL item bank, organized into unique domains composed of independent items that measure the full ability range of the adult CI population. The final item bank will now be used to develop new instruments that evaluate and differentiate adult CIQOL across the patient ability spectrum. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The Cochlear Implant Quality of Life Development Consortium collaborators consist of the following institutions (and individuals): University of Cincinnati (Ravi N. Samy, MD), University of Colorado (Samuel P. Gubbels, MD), Columbia University (Justin S. Golub, MD, MS), House Ear Clinic (Eric P. Wilkinson, MD; Dawna Mills, AuD), Johns Hopkins University (John P. Carey, MD), Kaiser Permanente-Los Angeles (Nopawan Vorasubin, MD), Kaiser Permanente-San Diego (Vickie Brunk, AuD), Mayo Clinic Rochester (Matthew L. Carlson, MD; Collin L. Driscoll, MD; Douglas P. Sladen, PhD), Medical University of South Carolina (Elizabeth L. Camposeo, AuD; Meredith A. Holcomb, AuD; Paul R. Lambert, MD; Ted A. Meyer, MD, PhD; Cameron Thomas, BS), Ohio State University (Aaron C. Moberly, MD), Stanford University (Nikolas H. Blevins, MD; Jannine B. Larky, MA), University of Maryland (Ronna P. Herzano, MD, PhD), University of Miami (Michael E. Hoffer, MD; Sandra M. Prentiss, PhD), University of Pennsylvania (Jason Brant, MD), University of Texas Southwestern (Jacob B. Hunter, MD; Brandon Isaacson, MD; J. Walter Kutz, MD), University of Utah (Richard K. Gurgel, MD), Virginia Mason Medical Center (Daniel M. Zeitler, MD), Washington University-Saint Louis (Craig A. Buchman, MD; Jill B. Firszt, PhD), Vanderbilt University (Rene H. Gifford, PhD; David S. Haynes, MD; Robert F. Labadie, MD, PhD). This research was made possible by funding from a K12 award through the South Carolina Clinical & Translational Research (SCTR) Institute, with an academic home at the Medical University of South Carolina, National Institutes of Health/National Center for Advancing Translational Sciences Grant Number UL1TR001450, a grant from the American Cochlear Implant Alliance, and a grant from the Doris Duke Charitable Foundation. Address for correspondence: Theodore R. McRackan, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 550, Office No. 1120 Rutledge Tower, Charleston, SC 29425, USA. E-mail: mcrackan@musc.edu. Received April 26, 2018; accepted October 15, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
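The confirmatory factor analysis and item response theory modeling described above require specialized tooling, but the spirit of the item screening can be illustrated with two classical statistics: Cronbach's alpha for a domain's internal consistency, and leave-one-out item-total correlations as a rough flag for items that do not fit their domain. This is a hypothetical NumPy sketch, not the authors' actual pipeline:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for a (subjects x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlate each item with the total of the remaining items
    (a common first screen for poorly fitting items)."""
    totals = items.sum(axis=1, keepdims=True)
    rest = totals - items  # leave-one-out totals, one column per item
    k = items.shape[1]
    return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                     for j in range(k)])
```

Items with low item-total correlations would be candidates for removal before refitting, loosely analogous to the misfit screening the abstract describes.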

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ebm2eL
via IFTTT

How Do You Deal With Uncertainty? Cochlear Implant Users Differ in the Dynamics of Lexical Processing of Noncanonical Inputs

Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. Beyond this temporary ambiguity, however, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which less information is available to recover a target word. The authors asked here whether CI users' frequent experience with degraded input leads to lexical dynamics that are better suited for coping with uncertainty. Design: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric-only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. Results: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later.
This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task, and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. Conclusions: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Hannah Rigler and Claire Goodwin for assistance with data collection and management, Camille Dunn for assistance with participant recruitment, and Bruce Gantz for support of the overall project. This project was funded by NIH Grants DC008089 awarded to B.M. and DC000242 awarded to Bruce Gantz and B.M. B.M. and K.S.A. conceptualized, designed, and implemented the study. B.M., T.P.E., and K.S.A. analyzed the results. B.M. and K.S.A. wrote the manuscript, which all three authors extensively discussed and edited. The authors have no conflicts of interest to disclose. Address for correspondence: Bob McMurray, Department of Psychological and Brain Sciences, University of Iowa, W314 SSH, Iowa City, IA 52242, USA. E-mail: Bob-mcmurray@uiowa.edu. Received August 7, 2017; accepted September 10, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
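The fixation time-course measure used in this paradigm reduces, at its core, to the proportion of trials on which gaze is on the target in successive time bins. The helper below is a minimal, hypothetical NumPy sketch of that reduction (the authors' actual growth-curve analyses are more sophisticated):

```python
import numpy as np

def fixation_proportions(fixations: np.ndarray, bin_size: int) -> np.ndarray:
    """fixations[t, s] = 1 if, on trial t, gaze was on the target at sample s.
    Returns the proportion of target fixations in each consecutive time bin,
    pooled over trials (extra samples past the last full bin are dropped)."""
    n_trials, n_samples = fixations.shape
    n_bins = n_samples // bin_size
    trimmed = fixations[:, :n_bins * bin_size]
    binned = trimmed.reshape(n_trials, n_bins, bin_size)
    return binned.mean(axis=(0, 2))
```

Plotting these proportions against bin onset time yields the familiar fixation curves; delayed or shallower curves correspond to the slower, weaker lexical commitment reported for the CI groups.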

from #Audiology via ola Kala on Inoreader https://ift.tt/2Em3Ccs
via IFTTT

Medical Referral Patterns and Etiologies for Children With Mild-to-Severe Hearing Loss

Objectives: To (1) identify the etiologies and risk factors of the patient cohort and determine the degree to which they reflected the incidence for children with hearing loss and (2) quantify practice management patterns in three catchment areas of the United States with available centers of excellence in pediatric hearing loss. Design: Medical information for 307 children with bilateral, mild-to-severe hearing loss was examined retrospectively. Children were participants in the Outcomes of Children with Hearing Loss (OCHL) study, a 5-year longitudinal study that recruited subjects at three different sites. Children aged 6 months to 7 years at the time of OCHL enrollment were participants in this study. Children with cochlear implants, children with severe or profound hearing loss, and children with significant cognitive or motor delays were excluded from the OCHL study and, by extension, from this analysis. Medical information was gathered using medical records and participant intake forms, the latter reflecting a caregiver’s report. A comparison group included 134 children with normal hearing. A chi-square test on two-way tables was used to assess differences in referral patterns by site for the children who are hard of hearing (CHH). Linear regression was performed on gestational age and birth weight as continuous variables. Risk factors were assessed using t tests. The alpha value was set at p
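As a sketch of the kind of test described (a Pearson chi-square on a two-way referral table), the helper below computes the statistic and degrees of freedom by hand; in practice `scipy.stats.chi2_contingency` would also supply the p-value. The counts here are invented for illustration and are not from the study:

```python
import numpy as np

def chi2_statistic(table: np.ndarray):
    """Pearson chi-square statistic and degrees of freedom
    for an r x c contingency table of counts."""
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()          # counts expected under independence
    stat = ((table - expected) ** 2 / expected).sum()
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, df

# Hypothetical counts: referred vs. not referred at two sites
referrals = np.array([[10, 20],
                      [20, 10]])
stat, df = chi2_statistic(referrals)
```

A large statistic relative to a chi-squared distribution with `df` degrees of freedom would indicate that referral patterns differ by site.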

from #Audiology via ola Kala on Inoreader https://ift.tt/2EblXHZ
via IFTTT

Huawei Introduces App to Translate Story Books into Sign Language

Huawei (https://www.huawei.com/us/) has launched StorySign, an app that uses the company's AI technology to translate children's books into sign language page by page, in Europe to help deaf children learn to read. Created in conjunction with experts and charities from the deaf community including the European Union of the Deaf and the British Deaf Association, StorySign contains two features that optimize the reading experience for deaf children: image recognition and optical character recognition. Image recognition allows children to position the phone at an angle from the book and the app will still recognize the words, while optical character recognition allows the app to function with greater accuracy. The AI performance will also power the speed at which pages from the book load in the app, so children won't be left waiting too long to find out what happens next in the story. The app currently showcases the popular children's book Where's Spot? by Eric Hill. StorySign can be downloaded for free from the Google Play Store and the Huawei AppGallery in 10 European countries.


Published: 12/7/2018 2:55:00 PM


from #Audiology via ola Kala on Inoreader https://ift.tt/2PqylG8
via IFTTT

Development of oral sensory-motor functions of preterm and low-birth-weight newborns under speech-language pathology care

Publication date: Available online 7 December 2018

Source: Revista de Logopedia, Foniatría y Audiología

Author(s): Flaviana de Souza Cardoso, Danielle Xavier Pereira, Dyego Leandro Bezerra de Souza, Renata Veiga Andersen Cavalcanti

Abstract
Introduction

Preterm and low-birth-weight newborns may present immaturity in the sucking, swallowing, and breathing functions. Speech-language pathologists working in the hospital focus on the development of newborns’ oral sensorimotor system, promoting a safe transition from tube feeding to breastfeeding and contributing to the quality of life of the child population. The present study aimed to analyze the development of oral functions, the oral feeding transition time, and breastfeeding in preterm and low-birth-weight newborns under speech-language pathology care.

Methods

A prognostic study was carried out at a maternity hospital, based on data collected from the archived medical records of 121 newborns seen between September 2015 and July 2017. The Kaplan–Meier method, the log-rank test, and the Pearson correlation test were used for data analysis, considering a significance level of 0.05 (95%).
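For readers unfamiliar with the survival-analysis machinery mentioned, the Kaplan–Meier estimate has a simple product-limit form. Below is a minimal, hypothetical implementation, not tied to the study's data; in applied work a library such as lifelines would normally be used:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times  : time to event or censoring for each subject
    events : 1 if the event was observed, 0 if censored
    Returns a list of (event_time, estimated survival just after it)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    curve, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = (times >= t).sum()            # subjects still under observation
        deaths = ((times == t) & (events == 1)).sum()
        s *= 1.0 - deaths / at_risk             # multiply in this step's survival
        curve.append((float(t), float(s)))
    return curve
```

In this study's framing, "survival" would be remaining on tube feeding, so the curve dropping toward zero corresponds to newborns reaching exclusive oral feeding; the log-rank test then compares such curves between groups.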

Results

It was observed that the lower the gestational age and birth weight of the newborns, the more speech therapy services were required before exclusive oral feeding (OF) was established; the transition time and the average duration of orogastric tube use were also inversely proportional to gestational age at birth. The non-nutritive sucking technique was the most used for stimulation, and 78.5% of the newborns were discharged from the hospital on exclusive breastfeeding.

Conclusion

Moderate-to-late preterm and low-birth-weight newborns acquire the functional pattern of the oral sensorimotor system more quickly, and there are indications that speech-language pathology care reduces the transition time to oral feeding, thus increasing the success rate of exclusive breastfeeding.

Summary
Introduction

Preterm and low-birth-weight newborns (NBs) may present immaturity in the sucking, swallowing, and breathing functions. Speech-language pathologists in hospitals work on the development of the NBs' oral sensorimotor system, promoting a safe transition from tube feeding to breastfeeding, which contributes to improving the quality of life of the child population. The aim of the present study was to analyze the development of oral functions, the transition time to oral feeding, and breastfeeding in preterm and low-birth-weight NBs under speech-language pathology care.

Methods

This prognostic study was carried out at a maternity hospital, based on data collected from the archived records of 121 NBs seen between September 2015 and July 2017. Data were analyzed using the Kaplan–Meier method, the log-rank test, and the Pearson correlation test, with a significance level of 0.05 (95%).

Results

The lower the gestational age and birth weight of the NB, the greater the need for speech-language pathology services before exclusive oral feeding was established, and the transition time and the average duration of orogastric tube use were inversely proportional to gestational age at birth. The non-nutritive sucking technique was the most used for stimulation, and 78.5% of the NBs were discharged from the hospital on exclusive breastfeeding.

Conclusion

Moderate-to-late preterm and low-birth-weight NBs acquire the functional pattern of the oral sensorimotor system more quickly, and there are indications that speech-language pathology care reduces the transition time to oral feeding and increases the success rate of exclusive breastfeeding.



from #Audiology via ola Kala on Inoreader https://ift.tt/2B59t1p
via IFTTT
