Wednesday, 11 May 2016

Physiological Evidence for a Midline Spatial Channel in Human Auditory Cortex

Abstract

Studies with humans and other mammals have provided support for a two-channel representation of horizontal (“azimuthal”) space in the auditory system. In this representation, location-sensitive neurons contribute activity to one of two broadly tuned channels whose responses are compared to derive an estimate of sound-source location. One channel is maximally responsive to sounds towards the left and the other to sounds towards the right. However, recent psychophysical studies of humans, and physiological studies of other mammals, point to the presence of an additional channel, maximally responsive to the midline. In this study, we used electroencephalography to seek physiological evidence for such a midline channel in humans. We measured neural responses to probe stimuli presented from straight ahead (0 °) or towards the right (+30 ° or +90 °). Probes were preceded by adapter stimuli to temporarily suppress channel activity. Adapters came from 0 ° or alternated between left and right (−30 ° and +30 ° or −90 ° and +90 °). For the +90 ° probe, to which the right-tuned channel would respond most strongly, both accounts predict greatest adaptation when the adapters are at ±90 °. For the 0 ° probe, the two-channel account predicts greatest adaptation from the ±90 ° adapters, while the three-channel account predicts greatest adaptation when the adapters are at 0 ° because these adapters stimulate the midline-tuned channel which responds most strongly to the 0 ° probe. The results were consistent with the three-channel account. In addition, a computational implementation of the three-channel account fitted the probe response sizes well, explaining 93 % of the variance about the mean, whereas a two-channel implementation produced a poor fit and explained only 61 % of the variance.
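To make the channel comparison concrete, the following minimal sketch (not the authors' implementation; the Gaussian tuning centres, width, and adaptation strength are assumed values) gives each channel a broad tuning curve over azimuth, suppresses a channel's gain in proportion to how strongly the adapters drive it, and sums the adapted channel activity as the predicted probe response:

import numpy as np

def tuning(azimuth, centre, sigma=60.0):
    # Broad Gaussian tuning over azimuth in degrees; sigma is an assumed width.
    return np.exp(-0.5 * ((azimuth - centre) / sigma) ** 2)

def probe_response(probe_az, adapter_azs, centres, k_adapt=0.5):
    # Predicted probe response: summed channel activity after adaptation.
    # k_adapt (assumed) scales how much an adapter suppresses the channels it drives.
    response = 0.0
    for c in centres:
        drive = np.mean([tuning(a, c) for a in adapter_azs])  # channel activity during adaptation
        gain = 1.0 - k_adapt * drive                          # adapted channel gain
        response += gain * tuning(probe_az, c)
    return response

two_channel = [-90.0, +90.0]          # left- and right-tuned channels
three_channel = [-90.0, 0.0, +90.0]   # with an additional midline-tuned channel

for probe in (0.0, 30.0, 90.0):
    for adapters in ([0.0], [-30.0, 30.0], [-90.0, 90.0]):
        r2 = probe_response(probe, adapters, two_channel)
        r3 = probe_response(probe, adapters, three_channel)
        print(f"probe {probe:+.0f} deg, adapters {adapters}: "
              f"two-channel {r2:.2f}, three-channel {r3:.2f}")

With these assumed parameters the sketch reproduces the qualitative predictions stated above: for the 0° probe, the three-channel model is adapted most by the 0° adapters, whereas the two-channel model is adapted most by the ±90° adapters.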



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ZEfJ4B
via IFTTT

Does Working Memory Enhance or Interfere With Speech Fluency in Adults Who Do and Do Not Stutter? Evidence From a Dual-Task Paradigm

Purpose
The present study examined whether engaging working memory in a secondary task benefits speech fluency. Effects of dual-task conditions on speech fluency, rate, and errors were examined with respect to predictions derived from three related theoretical accounts of disfluencies.
Method
Nineteen adults who stutter and twenty adults who do not stutter participated in the study. All participants completed 2 baseline tasks: a continuous-speaking task and a working-memory (WM) task involving manipulations of domain, load, and interstimulus interval. In the dual-task portion of the experiment, participants simultaneously performed the speaking task with each unique combination of WM conditions.
Results
All speakers showed similar fluency benefits and decrements in WM accuracy as a result of dual-task conditions. Fluency effects were specific to atypical forms of disfluency and were comparable across WM-task manipulations. Changes in fluency were accompanied by reductions in speaking rate but not by corresponding changes in overt errors.
Conclusions
Findings suggest that WM contributes to disfluencies regardless of stuttering status and that engaging WM resources while speaking enhances fluency. Further research is needed to verify the cognitive mechanism involved in this effect and to determine how these findings can best inform clinical intervention.

from #Audiology via ola Kala on Inoreader http://ift.tt/1T5F0El
via IFTTT

Receptive language as a predictor of cochlear implant outcome for prelingually deaf adults

10.3109/14992027.2016.1157269
Alexandra Rousset

from #Audiology via xlomafota13 on Inoreader http://ift.tt/24Luy9a
via IFTTT

Experimental validation of a nonlinear derating technique based upon Gaussian-modal representation of focused ultrasound beams


A technique useful for performing derating at acoustic powers where significant harmonic generation occurs is illustrated and validated with experimental measurements. The technique was previously presented using data from simulations. The method is based upon a Gaussian representation of the propagation modes, resulting in simple expressions for the modal quantities, but a Gaussian source is not required. The nonlinear interaction of modes within tissue is estimated from the nonlinear interaction in water, using appropriate amounts of source reduction and focal-point reduction derived from numerical simulations. An important feature of this nonlinear derating method is that focal temperatures can be estimated with little additional effort beyond that required to determine the focal pressure waveforms. Hydrophone measurements made in water were used to inform the derating algorithm, and the resulting pressure waveforms and increases in temperature were compared with values directly measured in tissue phantoms. For a 1.05 MHz focused transducer operated at 80 W and 128 W, the derated pressures (peak positive, peak negative) agreed with the directly measured values to within 11%. Focal temperature rises determined by the derating method agreed with values measured using a remote thermocouple technique with a difference of 17%.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1T8Jjkb
via IFTTT

A meshless method for unbounded acoustic problems


In this paper an effective meshless method is proposed to solve time-harmonic acoustic problems defined on unbounded domains. To this end, the near field is discretized by a set of nodes and the far field effect is taken into account by considering radiative boundary conditions. The approximation within the near field is performed using a set of local residual-free basis functions defined on a series of finite clouds. For considering the far field effect, a series of infinite clouds are defined on which another set of residual-free bases, satisfying the radiation conditions, are considered for the approximation. Validation of the results is performed through solving some acoustic problems.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YlPG1R
via IFTTT

Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises


Supervised speech segregation has been recently shown to improve human speech intelligibility in noise, when trained and tested on similar noises. However, a major challenge involves the ability to generalize to entirely novel noises. Such generalization would enable hearing aid and cochlear implant users to improve speech intelligibility in unknown noisy environments. This challenge is addressed in the current study through large-scale training. Specifically, a deep neural network (DNN) was trained on 10 000 noises to estimate the ideal ratio mask, and then employed to separate sentences from completely new noises (cafeteria and babble) at several signal-to-noise ratios (SNRs). Although the DNN was trained at the fixed SNR of 2 dB, testing using hearing-impaired listeners demonstrated that speech intelligibility increased substantially following speech segregation using the novel noises and unmatched SNR conditions of 0 dB and 5 dB. Sentence intelligibility benefit was also observed for normal-hearing listeners in most noisy conditions. The results indicate that DNN-based supervised speech segregation with large-scale training is a very promising approach for generalization to new acoustic environments.
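For context, the ideal ratio mask mentioned above is built from the time-frequency energies of the clean speech and the noise. The sketch below uses one common definition (the study's exact variant and parameters may differ) to compute the mask and apply it to the noisy mixture; in the study itself the mask is estimated by the trained DNN from the mixture alone, and the clean and noise signals are only needed to build training targets.

import numpy as np
from scipy.signal import stft, istft

def ideal_ratio_mask(speech, noise, fs, nperseg=512):
    # IRM(t, f) = S^2 / (S^2 + N^2): one common definition; the study may use a variant.
    _, _, S = stft(speech, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    _, _, X = stft(speech + noise, fs, nperseg=nperseg)
    irm = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
    _, enhanced = istft(irm * X, fs, nperseg=nperseg)  # mask the mixture and resynthesize
    return irm, enhanced

# Toy usage with synthetic stand-in signals.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)   # stand-in for clean speech
noise = 0.5 * np.random.randn(fs)      # stand-in for noise
irm, enhanced = ideal_ratio_mask(speech, noise, fs)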



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1T8JhJ8
via IFTTT

The multiple contributions of interaural differences to improved speech intelligibility in multitalker scenarios


Spatial separation of talkers is known to improve speech intelligibility in a multitalker scenario. A contribution of binaural unmasking, in addition to a better-ear effect, is usually considered to account for this advantage. Binaural unmasking is assumed to result from the spectro-temporally simultaneous presence of target and masker energy with different interaural properties. However, in the case of speech targets and speech interference, the spectro-temporal signal-to-noise ratio (SNR) fluctuates strongly, resulting in audible and localizable glimpses of target speech even at adverse global SNRs. The disparate interaural properties of target and masker may thus lead to improved segregation without requiring simultaneity. This study addresses the binaural contribution to spatial release from masking due to simultaneous disparities in interaural cues between target and interferers. For that purpose, stimuli were designed that lacked simultaneously occurring disparities but yielded a percept of spatially separated speech nearly indistinguishable from that of non-modified stimuli. A phoneme recognition experiment with either three collocated or three spatially separated talkers showed a substantial spatial release from masking for the modified stimuli. The results suggest that binaural unmasking made only a minor contribution to spatial release from masking, and that, instead, the interaural cues mediated by dominant speech components were essential.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YlPHCZ
via IFTTT

Size-isolation of ultrasound-mediated phase change perfluorocarbon droplets using differential centrifugation


Perfluorocarbon droplets that are capable of an ultrasound-mediated phase transition have applications in diagnostic and therapeutic ultrasound. Techniques to modify the droplet size distribution are of interest because of the size-dependent acoustic response of the droplets. Differential centrifugation has been used to isolate specific sizes of microbubbles. In this work, differential centrifugation was employed to isolate droplets with diameters between 1 and 3 μm and between 2 and 5 μm from an initially polydisperse distribution. Further, an empirical model was developed to predict the droplet size distribution following differential centrifugation and to facilitate the selection of centrifugation parameters for obtaining desired size distributions.
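The size-selection model in the paper is empirical, but the underlying physics can be sketched with Stokes sedimentation in a centrifugal field: droplets above a cutoff radius traverse the liquid column within the spin time and are pelleted, so spin speed and duration set the upper size limit of the remaining suspension. All parameter values below (densities, viscosity, column length, rotor radius) are illustrative assumptions, not values from the paper.

import math

def cutoff_diameter_um(rpm, spin_time_s, column_length_m=0.01, rotor_radius_m=0.08,
                       rho_droplet=1600.0, rho_medium=1000.0, viscosity=1.0e-3):
    # Stokes settling velocity in a centrifugal field:
    #   v = 2 r^2 (rho_droplet - rho_medium) omega^2 R / (9 mu)
    # A droplet is pelleted if v * t exceeds the liquid-column length. All numbers
    # here (densities, viscosity, geometry) are illustrative assumptions.
    omega = 2.0 * math.pi * rpm / 60.0
    r_cut = math.sqrt(9.0 * viscosity * column_length_m /
                      (2.0 * (rho_droplet - rho_medium) * omega ** 2 * rotor_radius_m * spin_time_s))
    return 2.0 * r_cut * 1e6   # cutoff diameter in micrometres

for rpm in (100, 300, 1000):
    print(f"{rpm} rpm, 1 min spin: droplets larger than about "
          f"{cutoff_diameter_um(rpm, 60.0):.1f} um are pelleted")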



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1rVxAJt
via IFTTT

The role of mitochondrial sirtuins in health and disease.

Free Radic Biol Med. 2016 May 6;

Authors: Osborne B, Bentley NL, Montgomery MK, Turner N

Abstract
Mitochondria play a critical role in energy production, cell signalling and cell survival. Defects in mitochondrial function contribute to the ageing process and ageing-related disorders such as metabolic disease, cancer, and neurodegeneration. The sirtuin family of deacylase enzymes has a variety of subcellular localisations and has been found to remove a growing list of post-translational acyl modifications from target proteins. SIRT3, SIRT4, and SIRT5 are located primarily in the mitochondria and are involved in many of the key processes of this organelle. SIRT3 has been the subject of intense research; it is primarily a deacetylase thought to function as a mitochondrial fidelity protein, with roles in mitochondrial substrate metabolism, protection against oxidative stress, and cell survival pathways. Less is known about the functional targets of SIRT4, which has deacetylase, ADP-ribosylase, and a newly described lipoamidase function, although key roles in lipid and glutamine metabolism have been reported. SIRT5 modulates a host of newly discovered acyl modifications, including succinylation, malonylation, and glutarylation, in both mitochondrial and extra-mitochondrial compartments; however, the functional significance of SIRT5 in the regulation of many of its proposed target proteins remains to be discovered. Because of their influence on a broad range of pathways, SIRT3, SIRT4, and SIRT5 are implicated in a range of disease states, including metabolic diseases such as diabetes, neurodegenerative diseases, cancer, and ageing-related disorders such as hearing loss and cardiac dysfunction. We review the current knowledge on the function of the three mitochondrial sirtuins, their role in disease, and the current outstanding questions in the field.

PMID: 27164052 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1OnvcQA
via IFTTT

KCNK5 channels mostly expressed in cochlear outer sulcus cells are indispensable for hearing.

Nat Commun. 2015;6:8780

Authors: Cazals Y, Bévengut M, Zanella S, Brocard F, Barhanin J, Gestreau C

Abstract
In the cochlea, K(+) is essential for mechano-electrical transduction. Here, we explore cochlear structure and function in mice lacking K(+) channels of the two-pore domain family. A profound deafness associated with a decrease in endocochlear potential is found in adult Kcnk5(-/-) mice. Hearing is present around postnatal day 19 (P19) but disappears completely 2 days later. At P19, Kcnk5(-/-) mice have a normal endolymphatic [K(+)] but a partly lowered endocochlear potential. Using Lac-Z as a gene reporter, we find KCNK5 mainly in outer sulcus Claudius', Boettcher's, and root cells. Low levels of expression are also seen in the spiral ganglion, Reissner's membrane, and stria vascularis. Essential channels (KCNJ10 and KCNQ1) contributing to K(+) secretion in the stria vascularis have normal expression in Kcnk5(-/-) mice. Thus, KCNK5 channels are indispensable for the maintenance of hearing. Among several plausible mechanisms, we emphasize their role in K(+) recycling along the outer sulcus lateral route.

PMID: 26549439 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1OnvcAn
via IFTTT

The Effect of False Vocal Folds on Laryngeal Flow Resistance in a Tubular Three-dimensional Computational Laryngeal Model

Publication date: Available online 10 May 2016
Source:Journal of Voice
Author(s): Qian Xue, Xudong Zheng
Objective
The current study used a three-dimensional (3D) computational laryngeal model to investigate the effect of false vocal folds (FVFs) on laryngeal flow resistance.
Method
A 3D, tubular-shaped computational laryngeal model was designed with a high level of realism with respect to human laryngeal anatomy. Two cases, one with and one without the FVFs, were created in the numerical simulation to compare the laryngeal flow behaviors.
Results and Conclusion
The results were discussed in comparison with a previous two-dimensional (2D) computational model. On the one hand, the results demonstrated a similar mechanism to that observed in the 2D model: the presence of the FVFs suppressed the deflection of the glottal jet and, in doing so, reduced the mixing-related minor loss in the supraglottal region. On the other hand, the 3D flow was more stable and straighter, so the effect of the FVFs on suppressing jet deflection was not as prominent in the 3D model as in the 2D model. Furthermore, the presence of the FVFs also increased the friction-related major loss because of the increased velocity gradient in the restricted flow channel. It was therefore hypothesized that the net effect of the FVFs on flow resistance is the combination of a reduced mixing-related minor loss and an increased friction-related major loss, both of which depend strongly on the gap between the FVFs.
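The hypothesis above combines two textbook pressure-loss terms. As a purely illustrative sketch (assumed geometry, air properties, and flow rate; not the model or geometry used in the study), the friction-related major loss can be written with the Darcy-Weisbach relation and the mixing-related minor loss with a Borda-Carnot style sudden-expansion term, and the two summed into a flow resistance:

import math

RHO = 1.2      # air density, kg/m^3 (assumed)
MU = 1.8e-5    # air dynamic viscosity, Pa*s (assumed)
Q = 2.0e-4     # volume flow rate, m^3/s (about 200 mL/s, an assumed phonatory flow)

def flow_resistance(area, length, downstream_area):
    # Pressure loss per unit flow (Pa*s/m^3) through a narrow channel that then
    # expands abruptly into a wider region. All geometry values are assumptions.
    v = Q / area                              # mean velocity in the narrow channel
    d_h = 2.0 * math.sqrt(area / math.pi)     # hydraulic diameter of an equivalent circle
    reynolds = RHO * v * d_h / MU             # Reynolds number
    f = 0.316 * reynolds ** -0.25             # Blasius friction factor (smooth-channel assumption)
    dp_major = f * (length / d_h) * 0.5 * RHO * v ** 2      # friction-related major loss
    v_down = Q / downstream_area
    dp_minor = 0.5 * RHO * (v - v_down) ** 2                # Borda-Carnot mixing (minor) loss
    return (dp_major + dp_minor) / Q

# A longer restricted channel raises the major loss, while a less abrupt downstream
# expansion lowers the minor loss; this is the trade-off hypothesized in the conclusion.
print("short channel, wide downstream :", flow_resistance(area=4e-6, length=3e-3, downstream_area=3e-4))
print("long channel, narrow downstream:", flow_resistance(area=4e-6, length=8e-3, downstream_area=4e-5))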



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1USvl5h
via IFTTT

Investigation of the Immediate Effects of Humming on Vocal Fold Vibration Irregularity Using Electroglottography and High-speed Laryngoscopy in Patients With Organic Voice Disorders

Publication date: Available online 10 May 2016
Source:Journal of Voice
Author(s): Carien Vlot, Makoto Ogawa, Kiyohito Hosokawa, Toshihiko Iwahashi, Chieri Kato, Hidenori Inohara
Objectives
The study aimed to investigate whether humming can immediately improve the regularity of vocal fold vibration on electroglottography (EGG) and laryngeal high-speed digital imaging (HSDI) in patients with organic dysphonia (OD).
Methods
In a series of 49 dysphonic patients diagnosed with benign mass lesions of the vocal folds and an equal number of non-dysphonic speakers, perturbation parameters were calculated from the acoustic (Ac) and EGG signals during natural and humming phonation. In addition, 11 OD patients and as many non-dysphonic speakers underwent simultaneous EGG and HSDI video recording under laryngofiberscopy while performing the two tasks. Perturbation parameters were calculated for the EGG signals and for the glottal area waveforms (GAW) extracted from the HSDI movies, and the correlations between the two sets of perturbation parameters were analyzed.
Results
Humming achieved significant improvements in the EGG perturbation parameters in both groups. More than half of the OD patients showed EGG perturbation parameters reduced to the level observed during natural phonation in the control group. With respect to the GAW analysis, moderate correlations were observed between the EGG and GAW perturbation parameters for both period and amplitude (period: r = 0.63, amplitude: r = 0.41). Humming significantly decreased both GAW perturbation parameters in the OD and control subjects combined.
Conclusions
These results demonstrate that, in OD patients, humming has the potential to improve voice quality by stabilizing vocal fold oscillation, and suggest that humming can reduce the functional component of the voice disturbance rather than the mechanical effect of the mass lesions.
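The perturbation parameters referred to above are cycle-to-cycle period and amplitude perturbation measures (jitter- and shimmer-like quantities). A minimal sketch of one common way to compute such a parameter, and of the correlation between EGG- and GAW-derived values, is given below; the exact definitions used in the study may differ, and all numbers are invented for illustration.

import numpy as np

def local_perturbation(values):
    # Mean absolute cycle-to-cycle difference as a percentage of the mean value.
    # Applied to cycle periods this is a jitter-like measure; applied to cycle
    # amplitudes it is a shimmer-like measure (one common definition).
    values = np.asarray(values, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(values))) / np.mean(values)

# Hypothetical cycle-by-cycle periods (ms) extracted from the EGG signal and from
# the glottal area waveform (GAW) for the same phonation.
egg_periods = [5.01, 5.05, 4.98, 5.10, 5.02, 4.97]
gaw_periods = [5.00, 5.06, 4.97, 5.12, 5.01, 4.96]
print("EGG period perturbation (%):", local_perturbation(egg_periods))
print("GAW period perturbation (%):", local_perturbation(gaw_periods))

# Correlation between EGG- and GAW-derived perturbation values across recordings
# (values invented for illustration).
egg_vals = np.array([0.6, 1.2, 0.9, 2.1, 1.5])
gaw_vals = np.array([0.7, 1.0, 1.1, 1.9, 1.6])
print("Pearson r:", np.corrcoef(egg_vals, gaw_vals)[0, 1])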



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1rVcpY0
via IFTTT

Receptive language as a predictor of cochlear implant outcome for prelingually deaf adults.

Int J Audiol. 2016 May 10;:1-7

Authors: Rousset A, Dowell R, Leigh J

Abstract
OBJECTIVE: This study investigated outcomes and predictive factors, specifically language skills, for a group of prelingually hearing-impaired adults who received a cochlear implant.
DESIGN: Speech perception data, demographic information, and other related variables such as communication mode, residual hearing, and receptive language abilities were explored. Pre- and post-implant speech perception scores were compared and multiple regression analysis was used to identify significant predictive relationships.
STUDY SAMPLE: The study included 43 adults with a prelingual onset of hearing loss, who proceeded with cochlear implantation at the Royal Victorian Eye and Ear Hospital in Melbourne, Australia.
RESULTS: The majority of patients experienced benefit from their cochlear implants, with 88% demonstrating significant improvement in speech perception performance. Volunteers achieved better post-operative speech perception scores if they had a shorter duration of severe-to-profound hearing loss, better language skills, and used an exclusively oral communication mode.
CONCLUSIONS: Although post-operative speech perception performance is significantly poorer for prelingually hearing-impaired adults compared to postlingually hearing-impaired patients, the study group demonstrated significant benefit from their cochlear implants. The variability in post-operative outcomes can be predicted to some extent from the hearing history and language abilities of the individual patient.

PMID: 27160793 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/24KDtaM
via IFTTT

An Exploration of Methods for Rating Children's Productions of Sibilant Fricatives.

Speech Lang Hear. 2016;19(1):36-45

Authors: Munson B, Carlson KU

Abstract
This paper examines three methods for providing ratings of within-category detail in children's productions of /s/ and /ʃ/. A group of listeners (n=61) participated in a rating task in which a forced-choice phoneme identification task was followed by one of three measures of phoneme goodness: visual analog scaling, direct magnitude estimation, or a Likert scale judgment. All three types of ratings were similarly correlated with sounds' acoustic characteristics. Visual analog scaling and Likert scale judgments had higher intra-rater reliability than did direct magnitude estimation. Moreover, both of them elicited a wider range of judgments than did direct magnitude estimation. Based on our evaluation, Likert scale judgments and visual analog scaling are equally useful tasks for eliciting within-category judgments. Of these two, visual analog scaling may be preferable because it allows for more distinct levels of response.
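As a concrete, hypothetical illustration of the comparison described above: intra-rater reliability can be estimated as the correlation between a listener's first and second ratings of the same tokens (one simple proxy; the study may use a different statistic), and the elicited range of judgments as the spread of ratings across tokens. All rating values below are invented.

import numpy as np

# Hypothetical ratings of the same 8 tokens, rated twice by one listener, for two
# of the rating methods (values invented for illustration).
vas_first  = np.array([12, 35, 48, 60, 71, 80, 88, 95], dtype=float)   # visual analog scale, 0-100
vas_second = np.array([15, 33, 50, 58, 74, 78, 90, 93], dtype=float)
dme_first  = np.array([10, 20, 40, 45, 60, 65, 70, 100], dtype=float)  # direct magnitude estimation
dme_second = np.array([ 5, 30, 35, 60, 50, 80, 60,  90], dtype=float)

def intra_rater_reliability(first, second):
    # Pearson correlation between repeated ratings of the same tokens.
    return np.corrcoef(first, second)[0, 1]

def rating_range(first, second):
    # Spread of elicited judgments, pooled over both rating passes.
    pooled = np.concatenate([first, second])
    return pooled.max() - pooled.min()

print("VAS reliability:", intra_rater_reliability(vas_first, vas_second))
print("DME reliability:", intra_rater_reliability(dme_first, dme_second))
print("VAS range:", rating_range(vas_first, vas_second))
print("DME range:", rating_range(dme_first, dme_second))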

PMID: 27158499 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/24JYBOy
via IFTTT
