Friday, 9 November 2018

Evaluation of a New Algorithm to Optimize Audibility in Cochlear Implant Recipients

Objectives: A positive relation between audibility and speech understanding has been established for cochlear implant (CI) recipients. Sound field thresholds of 20 dB HL across the frequency range provide CI users the opportunity to understand soft and very soft speech. However, programming the sound processor to attain good audibility can be time-consuming and difficult for some patients. To address these issues, Advanced Bionics (AB) developed the SoftVoice algorithm, designed to remove system noise and thereby improve the audibility of soft speech. The present study aimed to evaluate the efficacy of SoftVoice in optimizing AB CI recipients’ soft-speech perception. Design: Two studies were conducted. Study 1 had two phases, 1A and 1B. Sixteen adult AB CI recipients participated in Study 1A. Acute testing was performed in the unilateral CI condition using a Harmony processor programmed with participants’ everyday-use program (Everyday) and that same program with SoftVoice implemented. Speech recognition measures were administered at several presentation levels in quiet (35 to 60 dB SPL) and in noise (60 dB SPL). In Study 1B, 10 of the participants compared Everyday and SoftVoice at home to obtain feedback regarding the use of SoftVoice in various environments. During Study 2, soft-speech perception was acutely measured with Everyday and SoftVoice for 10 participants using the Naida CI Q70 processor. Results with the Harmony (Study 1A) and Naida processors were compared. Additionally, Study 2 evaluated programming options for setting electrode threshold levels (T-levels or Ts) to improve the usability of SoftVoice in daily life. Results: Study 1A showed significantly higher scores with SoftVoice than Everyday at soft presentation levels (35, 40, 45, and 50 dB SPL) and no significant differences between programs at a conversational level (60 dB SPL) in quiet or in noise.
After take-home experience with SoftVoice and Everyday (Study 1B), 5 of 10 participants reported preferring SoftVoice over Everyday; however, 6 reported bothersome environmental sound when listening with SoftVoice at home. Results of Study 2 indicated similar soft-speech perception between the Harmony and Naida processors. Additionally, implementing SoftVoice with Ts at the manufacturer’s default setting of 10% of most comfortable levels (Ms) reduced reports of bothersome environmental sound during take-home experience; however, soft-speech perception was best with SoftVoice when Ts were behaviorally set above 10% of Ms. Conclusions: Results indicate that SoftVoice is a potential tool for optimizing AB users’ audibility and, in turn, soft-speech perception. To achieve optimal performance at soft levels and comfortable use in daily environments, the setting of Ts must be considered with SoftVoice. Future research should examine program parameters that may benefit soft-speech perception when used in combination with SoftVoice (e.g., increased input dynamic range). ACKNOWLEDGMENTS: The authors express their appreciation to the participants who graciously gave their time and effort to participate in this study and to Chris Brenner, who assisted with data entry and editing parts of the manuscript. This research was supported by funds from Advanced Bionics, LLC and from NIH/NIDCD R01DC009010. The authors L.K.H., J.B.F., R.M.R., A.L.S., and L.M.L. designed the study, analyzed and interpreted data, and contributed to the writing of the manuscript. L.K.H. and N.Y.D. collected the data. N.Y.D. also assisted with interpretation of the data and writing of the manuscript. A.L.S. and L.M.L. are employed by Advanced Bionics, LLC. J.B.F. serves on the audiology advisory boards of Advanced Bionics, LLC and Cochlear Americas. Address for correspondence: Laura K. Holden, Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, 4523 Clayton Avenue, Campus Box 8115, St. Louis, MO 63110. E-mail: laurakholden@wustl.edu Received March 23, 2018; accepted September 27, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ozqq96
via IFTTT

Speech-in-Noise and Quality-of-Life Measures in School-Aged Children With Normal Hearing and With Unilateral Hearing Loss

Objectives: (1) Measure sentence recognition in co-located and spatially separated target and masker configurations in school-aged children with unilateral hearing loss (UHL) and with normal hearing (NH). (2) Compare self-reported hearing-related quality-of-life (QoL) scores in school-aged children with UHL and NH. Design: Listeners were school-aged children (6 to 12 yrs) with permanent UHL (n = 41) or NH (n = 35) and adults with NH (n = 23). Sentence reception thresholds (SRTs) were measured using Hearing In Noise Test–Children sentences in quiet and in the presence of 2-talker child babble or a speech-shaped noise masker in target/masker spatial configurations: 0/0, 0/−60, 0/+60, or 0/±60 degrees azimuth. Maskers were presented at a fixed level of 55 dBA, while the level of the target sentences varied adaptively to estimate the SRT. Hearing-related QoL was measured using the Hearing Environments and Reflection on Quality of Life (HEAR-QL-26) questionnaire for child subjects. Results: As a group, subjects with unaided UHL had higher (poorer) SRTs than age-matched peers with NH in all listening conditions. Effects of age, masker type, and spatial configuration of target and masker signals were found. Spatial release from masking was significantly reduced in conditions where the masker was directed toward UHL subjects’ normal-hearing ear. Hearing-related QoL scores were significantly poorer in subjects with UHL compared to those with NH. Degree of UHL, as measured by four-frequency pure-tone average, was significantly correlated with SRTs only in the two conditions where the masker was directed towards subjects’ normal-hearing ear, although the unaided Speech Intelligibility Index at 65 dB SPL was significantly correlated with SRTs in four conditions, some of which directed the masker to the impaired ear or both ears. Neither pure-tone average nor unaided Speech Intelligibility Index was correlated with QoL scores. 
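The adaptive measurement described above (masker fixed at 55 dBA, target level moved trial by trial toward the 50%-correct point) can be sketched with a simple simulation. All parameters below (step size, psychometric slope, starting level, and the true thresholds) are illustrative assumptions, not the study's actual tracking rules or data.

```python
import math
import random

def simulate_srt_track(true_srt, slope=0.5, step_db=2.0, n_trials=40,
                       start_db=65.0, seed=0):
    """Simulate a 1-down/1-up adaptive track for a sentence reception
    threshold (SRT): the masker is fixed, and the target level drops after
    a correct response and rises after an error, converging on the
    50%-correct point of a logistic psychometric function."""
    rng = random.Random(seed)
    level = start_db
    levels = []
    for _ in range(n_trials):
        levels.append(level)
        p_correct = 1.0 / (1.0 + math.exp(-slope * (level - true_srt)))
        level += -step_db if rng.random() < p_correct else step_db
    # Estimate the SRT as the mean level over the second half of the track.
    tail = levels[n_trials // 2:]
    return sum(tail) / len(tail)

# Spatial release from masking (SRM): the SRT improvement when target and
# masker are spatially separated rather than co-located (values assumed).
srt_colocated = simulate_srt_track(true_srt=52.0, seed=1)
srt_separated = simulate_srt_track(true_srt=46.0, seed=2)
srm = srt_colocated - srt_separated
```

A positive SRM indicates benefit from spatial separation; the reduced SRM reported for the UHL group corresponds to this difference shrinking when the masker sits at the normal-hearing ear.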
Conclusions: As a group, school-aged children with UHL showed substantial reductions in masked speech perception and hearing-related QoL, irrespective of sex, laterality of hearing loss, and degree of hearing loss. While some children demonstrated normal or near-normal performance in certain listening conditions, a disproportionate number of thresholds fell in the poorest decile of the NH data. These findings add to the growing literature challenging the past assumption that one ear is “good enough.” ACKNOWLEDGMENTS: The authors are grateful to Kelsey Cappetta for her assistance with subject recruitment and data collection, Kevin Randall and Michael Rogers for their support with software development, Kosuke Kawai for his assistance with statistical support, Patrick Zurek for his helpful comments on an earlier version of this article, and most especially to all the children and families who graciously participated in this research project. The authors also acknowledge the generosity of the following funding sources, which contributed to the execution of the research project: National Institute on Deafness and Other Communication Disorders DC-01625, University of Massachusetts Amherst Graduate School, Boston Children’s Hospital Otolaryngology Foundation. Portions of this research were completed at the University of Massachusetts Amherst in partial fulfillment of the first author’s doctoral dissertation requirements. Preliminary results of this study were presented at the Annual Scientific and Technology Conference of the American Auditory Society, Scottsdale, AZ, March 2017. The authors declare no conflict of interest. Address for correspondence: Amanda Griffin, Department of Otolaryngology and Communication Enhancement, Boston Children’s Hospital, 9 Hope Avenue, Waltham, MA 02453, USA. E-mail: Amanda.Griffin@childrens.harvard.edu Received April 23, 2018; accepted August 20, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2z0bNH6
via IFTTT

A Multimethod Analysis of Pragmatic Skills in Children and Adolescents With Fragile X Syndrome, Autism Spectrum Disorder, and Down Syndrome

Purpose
Pragmatic language skills are often impaired above and beyond general language delays in individuals with neurodevelopmental disabilities. This study used a multimethod approach to language sample analysis to characterize syndrome- and sex-specific profiles across different neurodevelopmental disabilities and to examine the congruency of 2 analysis techniques.
Method
Pragmatic skills of young males and females with fragile X syndrome with autism spectrum disorder (FXS-ASD, n = 61) and without autism spectrum disorder (FXS-O, n = 40), Down syndrome (DS, n = 42), and typical development (TD, n = 37) and males with idiopathic autism spectrum disorder only (ASD-O, n = 29) were compared using variables obtained from a detailed hand-coding system contrasted with similar variables obtained automatically from the language analysis program Systematic Analysis of Language Transcripts (SALT).
Results
Noncontingent language and perseveration were characteristic of the pragmatic profiles of boys and girls with FXS-ASD and boys with ASD-O. Boys with ASD-O also initiated turns less often and were more nonresponsive than other groups, and girls with FXS-ASD were more nonresponsive than their male counterparts. Hand-coding and SALT methods were largely convergent with some exceptions.
Conclusion
Results suggest both similarities and differences in the pragmatic profiles observed across different neurodevelopmental disabilities, including idiopathic and FXS-associated cases of ASD, as well as an important sex difference in FXS-ASD. These findings and congruency between the 2 language sample analysis techniques together have important implications for assessment and intervention efforts.

from #Audiology via ola Kala on Inoreader https://ift.tt/2QstjKt
via IFTTT

Immediate effects of valgus knee bracing on tibiofemoral contact forces and knee muscle forces

Publication date: Available online 8 November 2018

Source: Gait & Posture

Author(s): Michelle Hall, Laura E. Diamond, Gavin K. Lenton, Claudio Pizzolato, David J. Saxby

Abstract
Background

Valgus knee braces have been reported to reduce the external knee adduction moment during walking. However, mechanistic investigations into the effects of valgus bracing on medial compartment contact forces using electromyogram-driven neuromusculoskeletal models are limited.

Research question

What are the immediate effects of valgus bracing on medial tibiofemoral contact forces and muscular loading of the tibiofemoral joint?

Methods

Sixteen healthy adults (9 male; 27.7 ± 4.4 years) performed 20 over-ground walking trials at a self-selected speed, both with and without an Össur Unloader One® brace. Assessment order (i.e., with or without the brace) was randomised and counterbalanced to prevent order effects. While walking, three-dimensional lower-body motion, ground reaction forces, and surface electromyograms from eight lower-limb muscles were acquired. These data were used to calibrate an electromyogram-driven neuromusculoskeletal model of muscle and tibiofemoral contact forces (N), from which muscle and external load contributions (%) to those contact forces were determined.

Results

Although walking with the brace resulted in no significant changes in peak tibiofemoral contact forces at the group level, individual responses were variable and non-uniform. At the group level, wearing the brace resulted in a 2.35% (95% CI: 0.46 to 4.24; p = 0.02) greater relative contribution of muscle to lateral compartment contact loading (54.2 ± 11.1%) compared with not wearing the brace (51.8 ± 12.1%). Average relative contributions of muscle and external loads to medial compartment loading were comparable between the brace and no-brace conditions (p ≥ 0.05).
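Once the neuromusculoskeletal model has decomposed the compartment contact force into a muscle-generated part and an external-load part, the percentage contributions reported above reduce to a simple ratio. The force values below are hypothetical stand-ins for model outputs, not data from the study.

```python
import numpy as np

# Hypothetical model outputs over stance (N): compartment contact force
# split into a muscle-generated component and an external-load component.
muscle_force = np.array([300.0, 650.0, 900.0, 700.0, 400.0])
external_force = np.array([250.0, 500.0, 750.0, 650.0, 350.0])
total_force = muscle_force + external_force

# Relative contributions (%) to compartment loading, averaged over stance.
muscle_pct = float(np.mean(100.0 * muscle_force / total_force))
external_pct = 100.0 - muscle_pct
```

Because the two components sum to the total at every time point, the averaged percentages are complementary, which is why the paper can report a single muscle-contribution figure per compartment.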

Significance

Wearing a valgus knee brace did not immediately reduce peak tibiofemoral contact forces in healthy adults during normal walking. It appears this population may modulate muscle activation patterns to support brace-generated valgus moments, thereby maintaining normal walking knee moments and tibiofemoral contact forces. Future investigations using electromyogram-driven neuromusculoskeletal models are warranted to better understand the effects of valgus knee bracing in people with medial knee osteoarthritis.

from #Audiology via ola Kala on Inoreader https://ift.tt/2qBZRXf
via IFTTT

The frequency-following response (FFR) to speech stimuli: a normative dataset in healthy newborns

Publication date: Available online 9 November 2018

Source: Hearing Research

Author(s): Teresa Ribas-Prats, Laura Almeida, Jordi Costa-Faidella, Montse Plana, M.J. Corral, M. Dolores Gómez-Roig, Carles Escera

Abstract

The frequency-following response (FFR) is a neurophonic auditory evoked potential that reflects the efficiency of speech-sound encoding and is disrupted in a range of speech and language disorders. This raises the possibility of using it as a biomarker for literacy impairment. However, reference values for comparison with the normal population have not yet been established. The present study pursued the collection of a normative database characterizing the typical variability of the newborn FFR. FFRs were recorded to /da/ and /ga/ syllables in 46 neonates born at term. Seven parameters were retrieved in the time and frequency domains and analyzed for normality and for differences between stimuli. A comprehensive normative database of the newborn FFR is offered, with most parameters showing normal distributions and similar, robust responses for the /da/ and /ga/ stimuli. This is the first normative database of the FFR to characterize normal speech-sound processing during the immediate postnatal days, and it corroborates the feasibility of recording FFRs in neonates in the maternity hospital room. This normative database constitutes the first step toward the detection of early FFR abnormalities in newborns that would announce later language impairment, allowing early preventive measures from the first days of life.
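Frequency-domain FFR parameters commonly include the spectral amplitude at the stimulus fundamental frequency (F0) and a signal-to-noise ratio computed against flanking noise bins. The sketch below illustrates this on a synthetic response; the sampling rate, F0, and window length are assumptions for illustration, not the study's recording parameters or its seven specific measures.

```python
import numpy as np

fs = 8000                      # sampling rate in Hz (assumed)
f0 = 100.0                     # stimulus fundamental frequency in Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)  # 200-ms steady-state analysis window

# Synthetic "response": a component at F0 buried in background noise.
rng = np.random.default_rng(0)
response = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

# Amplitude spectrum, scaled so a sine of amplitude A peaks at ~A/2.
spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Spectral amplitude at F0, and SNR against neighbouring noise bins.
f0_bin = int(np.argmin(np.abs(freqs - f0)))
f0_amp = spectrum[f0_bin]
noise_bins = np.concatenate([spectrum[f0_bin - 12:f0_bin - 2],
                             spectrum[f0_bin + 3:f0_bin + 13]])
snr_db = 20 * np.log10(f0_amp / np.mean(noise_bins))
```

In a normative dataset, distributions of measures like `f0_amp` and `snr_db` across subjects are what define the reference range a clinical recording would be compared against.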

from #Audiology via ola Kala on Inoreader https://ift.tt/2PLhkLg
via IFTTT

The frequency-following response (FFR) to speech stimuli: a normative dataset in healthy newborns

Publication date: Available online 9 November 2018

Source: Hearing Research

Author(s): Teresa Ribas-Prats, Laura Almeida, Jordi Costa-Faidella, Montse Plana, M.J. Corral, M. Dolores Gómez-Roig, Carles Escera

Abstract

The Frequency-Following Response (FFR) is a neurophonic auditory evoked potential that reflects the efficient encoding of speech sounds and is disrupted in a range of speech and language disorders. This raises the possibility to use it as a potential biomarker for literacy impairment. However, reference values for comparison with the normal population are not yet established. The present study pursues the collection of a normative database depicting the standard variability of the newborn FFR. FFRs were recorded to /da/ and /ga/ syllables in 46 neonates born at term. Seven parameters were retrieved in the time and frequency domains, and analyzed for normality and differences between stimuli. A comprehensive normative database of the newborn FFR is offered, with most parameters showing normal distributions and similar robust responses for /da/ and /ga/ stimuli. This is the first normative database of the FFR to characterize normal speech sound processing during the immediate postnatal days, and corroborates the possibility to record the FFRs in neonates at the maternity hospital room. This normative database constitutes the first step towards the detection of early FFR abnormalities in newborns that would announce later language impairment, allowing early preventive measures from the first days of life.



from #Audiology via ola Kala on Inoreader https://ift.tt/2PLhkLg
via IFTTT

The frequency-following response (FFR) to speech stimuli: a normative dataset in healthy newborns

Publication date: Available online 9 November 2018

Source: Hearing Research

Author(s): Teresa Ribas-Prats, Laura Almeida, Jordi Costa-Faidella, Montse Plana, M.J. Corral, M. Dolores Gómez-Roig, Carles Escera

Abstract

The Frequency-Following Response (FFR) is a neurophonic auditory evoked potential that reflects the efficient encoding of speech sounds and is disrupted in a range of speech and language disorders. This raises the possibility to use it as a potential biomarker for literacy impairment. However, reference values for comparison with the normal population are not yet established. The present study pursues the collection of a normative database depicting the standard variability of the newborn FFR. FFRs were recorded to /da/ and /ga/ syllables in 46 neonates born at term. Seven parameters were retrieved in the time and frequency domains, and analyzed for normality and differences between stimuli. A comprehensive normative database of the newborn FFR is offered, with most parameters showing normal distributions and similar robust responses for /da/ and /ga/ stimuli. This is the first normative database of the FFR to characterize normal speech sound processing during the immediate postnatal days, and corroborates the possibility to record the FFRs in neonates at the maternity hospital room. This normative database constitutes the first step towards the detection of early FFR abnormalities in newborns that would announce later language impairment, allowing early preventive measures from the first days of life.



from #Audiology via ola Kala on Inoreader https://ift.tt/2PLhkLg
via IFTTT

The frequency-following response (FFR) to speech stimuli: a normative dataset in healthy newborns

Publication date: Available online 9 November 2018

Source: Hearing Research

Author(s): Teresa Ribas-Prats, Laura Almeida, Jordi Costa-Faidella, Montse Plana, M.J. Corral, M. Dolores Gómez-Roig, Carles Escera

Abstract

The Frequency-Following Response (FFR) is a neurophonic auditory evoked potential that reflects the efficient encoding of speech sounds and is disrupted in a range of speech and language disorders. This raises the possibility to use it as a potential biomarker for literacy impairment. However, reference values for comparison with the normal population are not yet established. The present study pursues the collection of a normative database depicting the standard variability of the newborn FFR. FFRs were recorded to /da/ and /ga/ syllables in 46 neonates born at term. Seven parameters were retrieved in the time and frequency domains, and analyzed for normality and differences between stimuli. A comprehensive normative database of the newborn FFR is offered, with most parameters showing normal distributions and similar robust responses for /da/ and /ga/ stimuli. This is the first normative database of the FFR to characterize normal speech sound processing during the immediate postnatal days, and corroborates the possibility to record the FFRs in neonates at the maternity hospital room. This normative database constitutes the first step towards the detection of early FFR abnormalities in newborns that would announce later language impairment, allowing early preventive measures from the first days of life.



from #Audiology via ola Kala on Inoreader https://ift.tt/2PLhkLg
via IFTTT
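The abstract above mentions parameters retrieved in the frequency domain but does not specify them. A common frequency-domain FFR metric in this literature is the spectral amplitude of the averaged response at the stimulus fundamental frequency (F0). The sketch below is illustrative only, assuming a hypothetical averaged waveform and a 100 Hz F0; the function name, bandwidth, and signal parameters are not from the study.

```python
import numpy as np

def f0_spectral_amplitude(response, fs, f0, bandwidth=5.0):
    """Mean spectral amplitude in a narrow band around f0.

    response : 1-D averaged FFR waveform
    fs       : sampling rate in Hz
    f0       : stimulus fundamental frequency in Hz
    """
    n = len(response)
    # Single-sided amplitude spectrum, scaled so a pure sine of
    # amplitude A yields a peak of ~A at its frequency bin.
    spectrum = np.abs(np.fft.rfft(response)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f0 - bandwidth) & (freqs <= f0 + bandwidth)
    return spectrum[band].mean()

# Simulated 100 Hz FFR-like response in noise (purely illustrative).
fs = 10000
t = np.arange(0, 0.17, 1.0 / fs)  # ~170 ms response window
rng = np.random.default_rng(0)
response = 0.5 * np.sin(2 * np.pi * 100 * t) \
    + 0.05 * rng.standard_normal(t.size)

amp = f0_spectral_amplitude(response, fs, f0=100.0)
```

With the simulated 0.5-amplitude sinusoid, `amp` recovers a value close to 0.5, which is the kind of per-subject measure that could then be pooled into a normative distribution.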

Compensatory and Serial Processing Models for Relating Electrophysiology, Speech Understanding, and Cognition

Objectives: The objective of this study was to develop a framework for investigating the roles of neural coding and cognition in speech perception. Design: N1 and P3 auditory evoked potentials, QuickSIN speech understanding scores, and Digit Symbol Coding cognitive test results were used to test the accuracy of a compensatory processing model versus a serial processing model. Results: The current dataset demonstrated that neither the compensatory nor the serial processing model was well supported. An additive processing model may best represent the relationships in these data. Conclusions: With the outcome measures used in this study, an additive processing model, in which exogenous neural coding and higher-order cognition contribute independently, best describes the effects of neural coding and cognition on speech perception. Further testing with additional outcome measures and a larger number of subjects is needed to confirm and further clarify the relationships between these processing domains. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).

ACKNOWLEDGMENTS: The authors thank Jane Gordon, Dan McDermott, and Tina Penman for their efforts with data collection and processing. This work was supported by the U.S. Department of Veterans Affairs (RR&D Service, C74554R) and the U.S. National Institutes of Health (NIDCD-DC15240). The contents do not represent the views of the U.S. Department of Veterans Affairs or the U.S. government. The authors have no conflicts of interest to disclose. Address for correspondence: Curtis J. Billings, National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA. E-mail: curtis.billings2@va.gov Received January 30, 2018; accepted September 4, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ds8Vql
via IFTTT
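The additive model described in the abstract above, in which neural coding and cognition contribute independently to speech understanding, can be illustrated with a simple least-squares comparison: the two-predictor fit should explain more variance than either predictor alone. This is a minimal sketch on synthetic data, not the study's actual analysis; the predictor names and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical predictors: an exogenous neural-coding index (e.g., an
# N1-like amplitude) and a cognitive score (e.g., a digit-symbol result).
neural = rng.standard_normal(n)
cognition = rng.standard_normal(n)

# Additive generative model: each domain contributes independently
# to the speech-understanding outcome.
speech = 0.6 * neural + 0.5 * cognition + 0.3 * rng.standard_normal(n)

def r_squared(X, y):
    """Proportion of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(y)), X])       # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_neural = r_squared(neural[:, None], speech)
r2_cognition = r_squared(cognition[:, None], speech)
r2_additive = r_squared(np.column_stack([neural, cognition]), speech)
```

Under these assumptions, `r2_additive` exceeds both single-predictor fits, which is the qualitative pattern an additive account predicts.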
