Tuesday, March 13, 2018

Examination of Prosody and Timbre Perception in Adults With Cochlear Implants Comparing Different Fine Structure Coding Strategies

Purpose
This study aimed to investigate whether adults with cochlear implants benefit from a change of fine structure (FS) coding strategies regarding the discrimination of prosodic speech cues, timbre cues, and the identification of natural instruments. The FS processing (FSP) coding strategy was compared to 2 settings of the FS4 strategy.
Method
A longitudinal crossover, double-blinded study was conducted. This study consisted of 2 parts, with 14 participants in the first part and 12 participants in the second part. Each part lasted 3 months, in which participants were alternately fitted with either the established FSP strategy or 1 of the 2 newly developed FS4 settings. Participants had to complete an intonation identification test; a timbre discrimination test in which 1 of 2 isolated cues changed, either the spectral centroid or the spectral irregularity; and an instrument identification test.
Results
A significant effect was seen in the discrimination of spectral irregularity with 1 of the 2 FS4 settings. The improvement was seen in the FS4 setting in which the upper envelope channels had a low stimulation rate. This improvement was not seen with the FS4 setting that had a higher stimulation rate on the envelope channels.
Conclusions
In general, the FSP strategy and the 2 settings of the FS4 strategy provided similar levels in the perception of prosody and timbre cues, as well as in the identification of instruments.
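
The two timbre cues manipulated in this study are standard spectral descriptors. As a rough illustration (the abstract does not give the exact definitions used), a spectral centroid and a Jensen-style spectral irregularity can be computed from a magnitude spectrum:

```python
import numpy as np

def spectral_centroid(mags, freqs):
    """Amplitude-weighted mean frequency of a magnitude spectrum."""
    return np.sum(freqs * mags) / np.sum(mags)

def spectral_irregularity(mags):
    """Jensen-style irregularity: squared jumps between adjacent
    partial amplitudes, normalized by the total squared amplitude."""
    return np.sum(np.diff(mags) ** 2) / np.sum(mags ** 2)

# A flat four-partial spectrum is perfectly regular, and its
# centroid sits at the middle of the band.
freqs = np.array([100.0, 200.0, 300.0, 400.0])
mags = np.ones(4)
print(spectral_centroid(mags, freqs))   # 250.0
print(spectral_irregularity(mags))      # 0.0
```

Shifting energy toward higher partials raises the centroid (perceived brightness); making adjacent partial amplitudes uneven raises the irregularity.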

from #Audiology via ola Kala on Inoreader http://ift.tt/2IqaKmy
via IFTTT

Patient Acceptance of Invasive Treatments for Tinnitus

Purpose
The field of neuromodulation is currently seeking to treat a wide range of disorders with various types of invasive devices. In recent years, several preclinical trials and case reports in humans have been published on their potential for chronic tinnitus. However, studies to obtain insight into patients' willingness to undergo these treatments are scarce. The aim of this survey study was to find out whether tinnitus patients are willing to undergo invasive neuromodulation when taking its risks, costs, and potential benefits into account.
Method
A Visual Analog Scale (VAS, 0–10) was used to measure the outcome. Spearman's rank-order correlation coefficients were computed to determine the correlation between patient characteristics and acceptance rates.
Results
Around one fifth of the patients were reasonably willing to undergo invasive treatment (VAS 5–7), and around one fifth were fully willing to do so (VAS 8–10). Hearing aids, used as a control, were accepted most, followed by cochlear implantation, deep brain stimulation, and cortical stimulation. Acceptance rates were slightly higher when the chance of cure was higher. Patients with a history of attempted treatments were more eager than others to find a new treatment for tinnitus.
Conclusions
A considerable proportion of patients with tinnitus would accept a variety of invasive treatments despite the associated risks or costs. When clinical neuromodulatory studies for tinnitus are to be performed, particular attention should be given to obtaining informed consent, including explaining the potential risks and providing a realistic outcome expectation.
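
Spearman's rank-order correlation, used here to relate patient characteristics to acceptance ratings, is simply the Pearson correlation of the rank vectors. A minimal tie-free sketch with hypothetical data (the actual patient variables are not given in the abstract):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation for tie-free samples:
    Pearson correlation of the rank vectors."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical example: years with tinnitus vs. VAS willingness (0-10).
duration = np.array([1, 3, 5, 8, 12, 20])
vas      = np.array([2, 4, 3, 6, 7, 9])
print(round(spearman_rho(duration, vas), 3))  # 0.943
```

For tie-free data this matches the classic formula rho = 1 − 6·Σd²/(n(n²−1)); tied values (common with a 0–10 VAS) need average ranks instead of `argsort` ranks.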

from #Audiology via ola Kala on Inoreader http://ift.tt/2pdBqyb
via IFTTT

Clinical Strategies for Sampling Word Recognition Performance

Purpose
Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list.
Method
The PB max simulations were conducted on a “client” with flat performance-intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance.
Results
A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score.
Conclusions
A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
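
The intuition behind these results can be reproduced with a crude Monte Carlo sketch that models each word as an independent Bernoulli trial and counts any score drop as a detected decline. This is a simplification of the paper's critical-range criterion, but it shows why precision grows with list length:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_drop_detected(n_words, p_before=0.60, p_after=0.40, trials=20000):
    """Fraction of simulated test-retest pairs in which the second
    score is lower than the first, treating each word as an
    independent Bernoulli trial (a simplification of real lists)."""
    first = rng.binomial(n_words, p_before, trials)
    second = rng.binomial(n_words, p_after, trials)
    return np.mean(second / n_words < first / n_words)

# Longer lists make a true 60% -> 40% decline easier to detect.
print(p_drop_detected(25))
print(p_drop_detected(125))
```

Even with this lenient "any drop" criterion, a 25-word list misses a substantial fraction of true declines, while a 125-word list almost never does; requiring a statistically significant drop (as the paper does) widens the gap further.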

from #Audiology via ola Kala on Inoreader http://ift.tt/2p8IHjO
via IFTTT

Psychophysical Boundary for Categorization of Voiced–Voiceless Stop Consonants in Native Japanese Speakers

Purpose
The purpose of this study was to investigate the psychophysical boundary used for categorization of voiced–voiceless stop consonants in native Japanese speakers.
Method
Twelve native Japanese speakers participated in the experiment. The stimuli were synthetic stop consonant–vowel stimuli varying in voice onset time (VOT) with manipulation of the amplitude of the initial noise portion and the first formant (F1) frequency of the periodic portion. There were 3 tasks, namely, speech identification to either /d/ or /t/, detection of the noise portion, and simultaneity judgment of onsets of the noise and periodic portions.
Results
The VOT boundaries of /d/–/t/ were close to the shortest VOT values that allowed for detection of the noise portion but not to those for perceived nonsimultaneity of the noise and periodic portions. The slopes of noise detection functions along VOT were as sharp as those of voiced–voiceless identification functions. In addition, the effects of manipulating the amplitude of the noise portion and the F1 frequency of the periodic portion on the detection of the noise portion were similar to those on voiced–voiceless identification.
Conclusion
The psychophysical boundary of perception of the initial noise portion masked by the following periodic portion may be used for voiced–voiceless categorization by Japanese speakers.
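
A common way to estimate a category boundary such as the /d/–/t/ VOT boundary is to find where the identification function crosses 50%. A minimal sketch using linear interpolation, with hypothetical identification data (the study's actual VOT steps and proportions are not in the abstract):

```python
import numpy as np

def category_boundary(vot_ms, p_voiceless, criterion=0.5):
    """Estimate the boundary as the VOT where the identification
    function first crosses the criterion, by linear interpolation
    between the two adjacent VOT steps."""
    above = np.where(p_voiceless >= criterion)[0][0]
    x0, x1 = vot_ms[above - 1], vot_ms[above]
    y0, y1 = p_voiceless[above - 1], p_voiceless[above]
    return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)

# Hypothetical proportions of /t/ responses at each VOT step.
vot = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
p_t = np.array([0.02, 0.05, 0.20, 0.80, 0.95, 1.00])
print(category_boundary(vot, p_t))  # ~25 ms
```

The same crossing logic applies to the noise-detection functions the authors compare against; a steeper function simply makes the interpolated crossing less sensitive to sampling noise.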

from #Audiology via ola Kala on Inoreader http://ift.tt/2DnAmgo
via IFTTT

Head shadow enhancement with low-frequency beamforming improves sound localization and speech perception for simulated bimodal listeners

Publication date: Available online 12 March 2018
Source:Hearing Research
Author(s): Benjamin Dieudonné, Tom Francart
Many hearing-impaired listeners struggle to localize sounds due to poor availability of binaural cues. Listeners with a cochlear implant and a contralateral hearing aid – so-called bimodal listeners – are amongst the worst performers, as both interaural time and level differences are poorly transmitted. We present a new method to enhance head shadow in the low frequencies. Head shadow enhancement is achieved with a fixed beamformer with contralateral attenuation in each ear. The method results in interaural level differences which vary monotonically with angle. It also improves low-frequency signal-to-noise ratios in conditions with spatially separated speech and noise. We validated the method in two experiments with acoustic simulations of bimodal listening. In the localization experiment, performance improved from 50.5° to 26.8° root-mean-square error compared with standard omni-directional microphones. In the speech-in-noise experiment, speech was presented from the frontal direction. Speech reception thresholds improved by 15.7 dB SNR when the noise was presented from the cochlear implant side, improved by 7.6 dB SNR when the noise was presented from the hearing aid side, and were not affected when noise was presented from all directions. Apart from bimodal listeners, the method might also be promising for bilateral cochlear implant or hearing aid users. Its low computational complexity makes the method suitable for application in current clinical devices.
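
The abstract does not give the beamformer's parameters, but the described scheme (attenuate the contralateral ear below a crossover, pass the highs through unchanged) can be sketched as follows. The 800 Hz cutoff, 0.7 attenuation factor, and first-order filters are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def lowpass(x, fs, cutoff):
    """First-order IIR low-pass, for illustration only."""
    a = np.exp(-2 * np.pi * cutoff / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = a * acc + (1 - a) * v
        y[i] = acc
    return y

def enhance_head_shadow(left, right, fs, cutoff=800.0, atten=0.7):
    """Per ear: keep the high band unchanged, and in the low band
    subtract an attenuated copy of the contralateral signal."""
    lo_l, lo_r = lowpass(left, fs, cutoff), lowpass(right, fs, cutoff)
    out_l = (left - lo_l) + (lo_l - atten * lo_r)
    out_r = (right - lo_r) + (lo_r - atten * lo_l)
    return out_l, out_r

# A 200 Hz tone that is louder at the left ear (source on the left):
# after processing, the interaural level difference is larger.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t)
l, r = tone, 0.8 * tone
out_l, out_r = enhance_head_shadow(l, r, fs)
rms = lambda x: np.sqrt(np.mean(x[2000:] ** 2))  # skip filter transient
ild_before = 20 * np.log10(rms(l) / rms(r))
ild_after = 20 * np.log10(rms(out_l) / rms(out_r))
print(ild_before, ild_after)  # the ILD grows after processing
```

Because the subtraction scales with the (already head-shadowed) contralateral level, the resulting ILD varies with source angle, which is the cue the paper reports as enhanced.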

from #Audiology via ola Kala on Inoreader http://ift.tt/2GnQ2Dr
via IFTTT

WFS1 mutation screening in a large series of Japanese hearing loss patients: Massively parallel DNA sequencing-based analysis.

PLoS One. 2018;13(3):e0193359

Authors: Kobayashi M, Miyagawa M, Nishio SY, Moteki H, Fujikawa T, Ohyama K, Sakaguchi H, Miyanohara I, Sugaya A, Naito Y, Morita SY, Kanda Y, Takahashi M, Ishikawa K, Nagano Y, Tono T, Oshikawa C, Kihara C, Takahashi H, Noguchi Y, Usami SI

Abstract
A heterozygous mutation in the Wolfram syndrome type 1 gene (WFS1) causes autosomal dominant nonsyndromic hereditary hearing loss, DFNA6/14/38, or Wolfram-like syndrome. To date, more than 40 different mutations have been reported to be responsible for DFNA6/14/38. In the present study, WFS1 variants were screened in a large series of Japanese hearing loss (HL) patients to clarify the prevalence and clinical characteristics of DFNA6/14/38 and Wolfram-like syndrome. Massively parallel DNA sequencing of 68 target genes was performed in 2,549 unrelated Japanese HL patients to identify genomic variations responsible for HL. The detailed clinical features in patients with WFS1 variants were collected from medical charts and analyzed. We successfully identified 13 WFS1 variants in 19 probands: eight of the 13 variants were previously reported mutations, including three mutations (p.A684V, p.K836N, and p.E864K) known to cause Wolfram-like syndrome, and five were novel mutations. Variants were detected in 15 probands (2.5%) in 602 families with presumably autosomal dominant or mitochondrial HL, and in four probands (0.7%) in 559 sporadic cases; however, no variants were detected in the other 1,388 probands with autosomal recessive or unknown family history. Among the 30 individuals possessing variants, marked variations were observed in the onset of HL as well as in the presence of progressive HL and tinnitus. Vestibular symptoms, which had been rarely reported, were present in 7 out of 30 (23%) of the affected individuals. The most prevalent audiometric configuration was low-frequency type; however, some individuals had high-frequency HL. Haplotype analysis of three mutations (p.A716T, p.K836T, and p.E864K) suggested that these sites are mutation hot spots. The present study provided new insights into the audiovestibular phenotypes in patients with WFS1 mutations.

PMID: 29529044 [PubMed - in process]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2IoWY3A
via IFTTT

The sound of migration: exploring data sonification as a means of interpreting multivariate salmon movement datasets.

Heliyon. 2018 Feb;4(2):e00532

Authors: Hegg JC, Middleton J, Robertson BL, Kennedy BP

Abstract
The migration of Pacific salmon is an important part of functioning freshwater ecosystems, but as populations have decreased and ecological conditions have changed, so have migration patterns. Understanding how the environment and human impacts change salmon migration behavior requires observing migration at small temporal and spatial scales across large geographic areas. Studying these detailed fish movements is particularly important for one threatened population of Chinook salmon in the Snake River of Idaho, whose juvenile behavior may be rapidly evolving in response to dams and other anthropogenic impacts. However, exploring movement datasets covering large numbers of salmon can be challenging because of the difficulty of visualizing multivariate, time-series data. Previous research indicates that sonification, representing data using sound, has the potential to enhance exploration of such datasets. We developed sonifications of individual fish movements using a large dataset of otolith microchemistry from Snake River Fall Chinook salmon. Otoliths, the balance and hearing organs of fish, provide a detailed chemical record of fish movements, recorded in the tree-like rings they deposit each day the fish is alive. These data represent a scalable, multivariate record of salmon movement that is ideal for sonification. We tested independent listener responses to validate the effectiveness of the sonification tool and mapping methods. The sonifications were presented in a survey in which untrained listeners identified salmon movements from recordings of increasing numbers of fish, with and without visualizations. Our results showed that untrained listeners were most sensitive to transitions mapped to pitch and timbre. Accuracy results were non-intuitive: in aggregate, respondents clearly identified important transitions, but individual accuracy was low. This aggregate effect has potential implications for the use of sonification in crowd-sourced data exploration. The addition of more fish, and of visuals, to the sonification increased the time listeners took to identify transitions.
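The pitch-and-timbre mapping the listeners responded to can be illustrated with a minimal sketch (not the authors' tool; the variable names, ratio values, and the linear pitch mapping are all assumptions for illustration): one chemical signature drives pitch and a second drives harmonic content, so an abrupt movement transition, such as a fish entering the ocean, becomes an audible jump.

```python
# Illustrative data-sonification sketch: map a multivariate time series
# to sound parameters (pitch + timbre), stdlib only.
import math

def value_to_freq(x, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value in [lo, hi] to a frequency in [f_lo, f_hi] Hz."""
    t = (x - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

def synth_tone(freq, brightness, dur=0.25, rate=8000):
    """Render one tone; 'brightness' in [0, 1] blends in the second harmonic (timbre)."""
    n = int(dur * rate)
    return [
        (1 - brightness) * math.sin(2 * math.pi * freq * i / rate)
        + brightness * math.sin(2 * math.pi * 2 * freq * i / rate)
        for i in range(n)
    ]

def sonify(pitch_series, timbre_series):
    """Concatenate one tone per sample: pitch tracks the first variable, timbre the second."""
    lo, hi = min(pitch_series), max(pitch_series)
    samples = []
    for x, b in zip(pitch_series, timbre_series):
        samples.extend(synth_tone(value_to_freq(x, lo, hi), b))
    return samples

# Hypothetical otolith ratios: a freshwater-to-ocean transition at index 3
# becomes a simultaneous jump in pitch and timbre.
sr_ca = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]
ba_ca = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
audio = sonify(sr_ca, ba_ca)
```

The sample list could be written out with the stdlib `wave` module for listening; the sketch only shows the mapping step.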

PMID: 29527578 [PubMed]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p4dLBi
via IFTTT
