OtoRhinoLaryngology by Sfakianakis G. Alexandros, Anapafseos 5, Agios Nikolaos 72100, Crete, Greece, tel: 00302841026182, 00306932607174
Friday, 29 January 2016
Is Hearing Loss Associated with Poorer Health in Older Adults Who Might Benefit from Hearing Screening?
from #Audiology via ola Kala on Inoreader http://ift.tt/1TsXdM1
via IFTTT
Decoding four different sound-categories in the auditory cortex using functional near-infrared spectroscopy
Source: Hearing Research
Author(s): Keum-Shik Hong, Hendrik Santosa
The ability of the auditory cortex to distinguish different sounds is important in daily life. This study investigated whether activations in the auditory cortex elicited by different sounds can be distinguished using functional near-infrared spectroscopy (fNIRS). Hemodynamic responses (HRs) were measured with fNIRS over both hemispheres in 18 subjects while they listened to four sound categories (English speech, non-English speech, annoying sounds, and nature sounds). The mean, slope, and skewness of the oxy-hemoglobin (HbO) signal were used as features for classification. For the language-related stimuli, the HRs evoked by intelligible speech (English) covered a broader brain region than those evoked by non-English speech, and the HbO signals evoked by English speech were larger in magnitude; the ratio of the peak values of non-English to English speech was 72.5%. Likewise, the brain region activated by annoying sounds was wider than that activated by nature sounds, although the signal strength for nature sounds was stronger. Finally, for brain-computer interface (BCI) purposes, linear discriminant analysis (LDA) and support vector machine (SVM) classifiers were applied to the four sound categories. Overall classification performance was higher for the left hemisphere than for the right, so the left hemisphere is recommended for decoding auditory commands. In two-class classification, annoying vs. nature sounds yielded higher accuracy than English vs. non-English speech, and LDA outperformed SVM.
from #Audiology via ola Kala on Inoreader http://ift.tt/1QLRDDO
via IFTTT
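The classification pipeline described in this abstract (per-channel mean, slope, and skewness of the HbO time course, fed to LDA and SVM classifiers) can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not the authors' implementation: the synthetic data, array shapes, channel count, and classifier settings are assumptions made for the example.

# Minimal sketch of mean/slope/skewness feature extraction from HbO signals
# and an LDA vs. SVM comparison, loosely following the abstract. All data
# here are synthetic placeholders.
import numpy as np
from scipy.stats import skew, linregress
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 80, 8, 100            # hypothetical dimensions
hbo = rng.normal(size=(n_trials, n_channels, n_samples))  # stand-in HbO signals
labels = rng.integers(0, 4, size=n_trials)               # four sound categories

def trial_features(trial):
    """Mean, slope, and skewness of each channel's HbO time course."""
    t = np.arange(trial.shape[-1])
    feats = []
    for ch in trial:
        feats.extend([ch.mean(), linregress(t, ch).slope, skew(ch)])
    return feats

X = np.array([trial_features(tr) for tr in hbo])

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="linear"))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")

With real trial-averaged HbO data in place of the random arrays, the same cross-validation loop would reproduce the kind of LDA-versus-SVM comparison the abstract reports.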
Brain's 'amplifier' compensates for lost inner ear function
from #Audiology via ola Kala on Inoreader http://ift.tt/1m0ztRM
via IFTTT
A Challenging Form of Non-autoimmune Insulin-Dependent Diabetes in a Wolfram Syndrome Patient with a Novel Sequence Variant.
J Diabetes Metab. 2015 Jun;6(7):1-5
Authors: Paris LP, Usui Y, Serino J, Sá J, Friedlander M
Abstract
Wolfram syndrome type 1 is a rare, autosomal recessive, neurodegenerative disorder that is diagnosed when insulin-dependent diabetes of non-autoimmune origin and optic atrophy are concomitantly present. Wolfram syndrome is also designated DIDMOAD, an acronym for its most frequent manifestations: diabetes insipidus, diabetes mellitus, optic atrophy, and deafness. With disease progression, patients also commonly develop severe neurological and genito-urinary tract abnormalities. Compared with the general type 1 diabetic population, patients with Wolfram syndrome have been reported to have a form of diabetes that is more easily controlled and causes fewer microvascular complications, such as diabetic retinopathy. We report a case of Wolfram syndrome in a 16-year-old male patient who presented with progressive optic atrophy and severe diabetes with very challenging glycemic control despite intensive therapy since diagnosis at the age of 6. Despite inadequate metabolic control, he did not develop any diabetic microvascular complications during the 10-year follow-up period. To further investigate potential causes for this metabolic idiosyncrasy, we performed genetic analyses that revealed a novel combination of homozygous sequence variants that are likely the cause of the syndrome in this family. The identified genotype included a novel sequence variant in the Wolfram syndrome type 1 gene along with a previously described one, which had initially been associated with isolated low-frequency sensorineural hearing loss (LFSNHL). Interestingly, our patient did not show any abnormal findings on audiometry testing.
PMID: 26819810 [PubMed - as supplied by publisher]
from #Audiology via xlomafota13 on Inoreader http://ift.tt/20xkezn
via IFTTT
Identification of a recurrent mitochondrial mutation in a Japanese family with palmoplantar keratoderma, nail dystrophy, and deafness.
Eur J Dermatol. 2015 Jan-Feb;25(1):79-81
Authors: Hayashi R, Fujiwara H, Morishita M, Ito M, Shimomura Y
PMID: 25513986 [PubMed - indexed for MEDLINE]
from #Audiology via xlomafota13 on Inoreader http://ift.tt/20xk5Ms
via IFTTT
MEKK4 Signaling Regulates Sensory Cell Development and Function in the Mouse Inner Ear.
J Neurosci. 2016 Jan 27;36(4):1347-61
Authors: Haque K, Pandey AK, Zheng HW, Riazuddin S, Sha SH, Puligilla C
Abstract
UNLABELLED: Mechanosensory hair cells (HCs) residing in the inner ear are critical for hearing and balance. Precise coordination of proliferation, sensory specification, and differentiation during development is essential to ensure the correct patterning of HCs in the cochlear and vestibular epithelium. Recent studies have revealed that FGF20 signaling is vital for proper HC differentiation. However, the mechanisms by which FGF20 signaling promotes HC differentiation remain unknown. Here, we show that mitogen-activated protein 3 kinase 4 (MEKK4) expression is highly regulated during inner ear development and is critical for normal cytoarchitecture and function. Mice homozygous for a kinase-inactive MEKK4 mutation exhibit significant hearing loss. Lack of MEKK4 activity in vivo also leads to a significant reduction in the number of cochlear and vestibular HCs, suggesting that MEKK4 activity is essential for the overall development of HCs within the inner ear. Furthermore, we show that loss of FGF20 signaling in vivo inhibits MEKK4 activity, whereas gain of Fgf20 function stimulates MEKK4 expression, suggesting that Fgf20 modulates MEKK4 activity to regulate cellular differentiation. Finally, we demonstrate, for the first time, that MEKK4 acts as a critical node integrating FGF20-FGFR1 signaling responses to specifically influence HC development and that FGFR1 signaling through activation of MEKK4 is necessary for outer hair cell differentiation. Collectively, this study provides compelling evidence of an essential role for MEKK4 in inner ear morphogenesis, identifies MEKK4 expression as required for the specific response of FGFR1 during HC development, and shows that FGF20/FGFR1 signaling activates MEKK4 for normal sensory cell differentiation.
SIGNIFICANCE STATEMENT: Sensory hair cells (HCs) are the mechanoreceptors within the inner ear responsible for our sense of hearing. HCs are formed before birth, and mammals lack the ability to restore the sensory deficits associated with their loss. In this study, we show, for the first time, that MEKK4 signaling is essential for the development of normal cytoarchitecture and hearing function, as MEKK4 signaling-deficient mice exhibit a significant reduction of HCs and hearing loss. We also identify MEKK4 as a critical hub kinase through which FGF20-FGFR1 signaling induces HC differentiation in the mammalian cochlea. These results reveal a new paradigm in the regulation of HC differentiation and provide significant new insights into the mechanism of Fgf signaling governing HC formation.
PMID: 26818521 [PubMed - in process]
from #Audiology via ola Kala on Inoreader http://ift.tt/1OTYtES
via IFTTT