Saturday 1 September 2018

Human frequency following responses to iterated rippled noise with positive and negative gain: Differential sensitivity to waveform envelope and temporal fine-structure

Publication date: September 2018

Source: Hearing Research, Volume 367

Author(s): Saradha Ananthakrishnan, Ananthanarayan Krishnan

Abstract

The perceived pitch of iterated rippled noise (IRN) with negative gain (IRNn) is an octave lower than that of IRN with positive gain (IRNp). IRNp and IRNn have identical waveform envelopes (ENV) but differing stimulus waveform fine structure (TFS), which likely accounts for this perceived pitch difference. Here, we examine whether differences in the temporal pattern of phase-locked activity reflected in the human brainstem Frequency Following Response (FFR) elicited by IRNp and IRNn can account for the differences in perceived pitch for the two stimuli. FFRs using a single onset polarity were measured in 13 normal-hearing adult listeners in response to IRNp and IRNn stimuli with 2 ms and 4 ms delays. Autocorrelation functions (ACFs) and Fast Fourier Transforms (FFTs) were used to evaluate the dominant periodicity and spectral pattern (harmonic spacing) in the phase-locked FFR neural activity. For both delays, the harmonic spacing in the spectra corresponded more strongly with the perceived lowering of pitch from IRNp to IRNn than the ACFs did. These results suggest that the FFR elicited by a single-polarity stimulus reflects phase-locking to both stimulus ENV and TFS. A post-hoc experiment evaluating the FFR phase-locked activity to ENV (FFRENV) and TFS (FFRTFS) elicited by IRNp and IRNn confirmed that only the phase-locked activity to the TFS, reflected in FFRTFS, showed differences in both spectra and ACFs that closely matched the pitch difference between the two stimuli. The results of the post-hoc experiment suggest that pitch-relevant information is preserved in the temporal pattern of phase-locked activity and that the differences in stimulus ENV and TFS driving the pitch percepts of IRNp and IRNn are preserved in the brainstem neural response. The scalp-recorded FFR may provide a noninvasive analytic tool to evaluate the relative contributions of envelope and temporal fine structure in the neural representation of complex sounds in humans.
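The analysis above rests on two standard signal-processing steps. As a rough illustration only (not the authors' code, and with all parameter values assumed), the sketch below generates delay-and-add IRN with positive or negative gain and summarizes a waveform with an autocorrelation function and an FFT, the two measures applied to the FFR in this study.

import numpy as np

def make_irn(duration_s=0.25, fs=20000, delay_ms=4.0, gain=+1.0, n_iter=8, seed=0):
    """Delay-and-add IRN: each iteration adds a delayed copy scaled by +/- gain."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration_s * fs))
    d = int(round(delay_ms * 1e-3 * fs))
    for _ in range(n_iter):
        y = x.copy()
        y[d:] += gain * x[:-d]           # add the delayed, scaled copy
        x = y / np.max(np.abs(y))        # renormalize to keep amplitudes bounded
    return x, fs

def acf(sig):
    """Normalized autocorrelation; the lag of its largest non-zero-lag peak
    indexes the dominant periodicity in the waveform."""
    a = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    return a / a[0]

def spectrum(sig, fs):
    """Magnitude spectrum; the spacing of its ripples/harmonics is the FFT-based
    counterpart used to assess harmonic spacing."""
    mag = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return np.fft.rfftfreq(len(sig), 1.0 / fs), mag

irn_p, fs = make_irn(gain=+1.0)   # IRNp: dominant periodicity near the delay
irn_n, _ = make_irn(gain=-1.0)    # IRNn: pitch reported roughly an octave lower
for name, sig in [("IRNp", irn_p), ("IRNn", irn_n)]:
    lags = acf(sig)
    peak_lag = np.argmax(lags[10:]) + 10   # skip the zero-lag region
    print(name, "dominant ACF lag ~", round(1000 * peak_lag / fs, 2), "ms")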



from #Audiology via ola Kala on Inoreader https://ift.tt/2Or4mPe
via IFTTT

Detection of single mRNAs in individual cells of the auditory system

Publication date: September 2018

Source: Hearing Research, Volume 367

Author(s): Pezhman Salehi, Charlie N. Nelson, Yingying Chen, Debin Lei, Samuel D. Crish, Jovitha Nelson, Hongyan Zuo, Jianxin Bao

Abstract

Gene expression analysis is essential for understanding the rich repertoire of cellular functions. With the development of sensitive molecular tools such as single-cell RNA sequencing, extensive gene expression data can be obtained and analyzed from various tissues. Single-molecule fluorescence in situ hybridization (smFISH) has emerged as a powerful complementary tool for single-cell genomics studies because of its ability to map and quantify the spatial distributions of single mRNAs at the subcellular level in their native tissue. Here, we present a detailed method to study the copy numbers and spatial localizations of single mRNAs in the cochlea and inferior colliculus. First, we demonstrate that smFISH can be performed successfully in adult cochlear tissue after decalcification. Second, we show that the smFISH signals can be detected with high specificity. Third, we adapt an automated transcript analysis pipeline to quantify and identify single mRNAs in a cell-specific manner. Lastly, we show that our method can be used to study possible correlations between transcriptional and translational activities of single genes. Thus, we have developed a detailed smFISH protocol that can be used to study the expression of single mRNAs in specific cell types of the peripheral and central auditory systems.
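The abstract mentions an automated transcript analysis pipeline for quantifying single mRNAs. The snippet below is a generic, hypothetical illustration of one common building block of such pipelines, Laplacian-of-Gaussian spot detection on a synthetic image; it is not the authors' pipeline, and the image, sigma range, and threshold are invented for the demo.

import numpy as np
from skimage.feature import blob_log

rng = np.random.default_rng(1)
image = rng.normal(0.05, 0.01, size=(256, 256))    # dim, noisy background
for _ in range(40):                                 # add 40 bright spot-like "mRNAs"
    y, x = rng.integers(5, 251, size=2)
    yy, xx = np.mgrid[-3:4, -3:4]
    image[y - 3:y + 4, x - 3:x + 4] += np.exp(-(yy ** 2 + xx ** 2) / 2.0)

# Each detected blob is (row, col, sigma); the count approximates copies per cell
# once the image is restricted to a segmented cell mask.
blobs = blob_log(image, min_sigma=1, max_sigma=3, num_sigma=5, threshold=0.3)
print("detected spots:", len(blobs))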



from #Audiology via ola Kala on Inoreader https://ift.tt/2K6SrCJ
via IFTTT

Editorial Board

Publication date: September 2018

Source: Hearing Research, Volume 367

Author(s):



from #Audiology via ola Kala on Inoreader https://ift.tt/2OLbqFy
via IFTTT

Editorial introduction: The 6th International Conference on Auditory Cortex

Publication date: September 2018

Source: Hearing Research, Volume 366

Author(s): Blake E. Butler, Yale E. Cohen, Stephen G. Lomber



from #Audiology via ola Kala on Inoreader https://ift.tt/2BjX6lw
via IFTTT

Editorial Board

Publication date: September 2018

Source: Hearing Research, Volume 366

Author(s):



from #Audiology via ola Kala on Inoreader https://ift.tt/2MDHgqE
via IFTTT

Stimulus-specific adaptation in the anesthetized mouse revealed by brainstem auditory evoked potentials

Publication date: Available online 31 August 2018

Source: Hearing Research

Author(s): Daniel Duque, Rui Pais, Manuel S. Malmierca

Abstract

Neural responses to sensory inputs in a complex and natural environment must be weighted according to their relevance. To do so, the brain needs to be able to deal with sudden stimulus fluctuations in an ever-changing acoustic environment. Stimulus-specific adaptation (SSA) is a phenomenon whereby some neurons along the auditory pathway show a reduced response to repetitive sounds while remaining responsive to sounds that occur rarely. SSA has been shown from the inferior colliculus to the auditory cortex but has not been detected in the cochlear nucleus. To discover where SSA is first generated along the auditory pathway, auditory brainstem responses (ABRs) to pure tones were evaluated in anesthetized mice using an oddball paradigm. With a typical narrow band-pass filter, changes in the ABRs suggest that unspecific short-term adaptation may occur as early as the auditory nerve fibers. Furthermore, after applying a wide band-pass filter, which allows visualization of a late slow wave in the ABR, we found a reduction in the amplitude of the response to repetitive sounds, compared to rare ones, in the slow-wave component P0 that follows the fast wave V. Previous studies have shown that P0 is temporally correlated with the sustained responses of the inferior colliculus; we therefore suggest that this nucleus is the first along the auditory pathway to show stimulus-specific adaptation.
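To make the filtering step concrete, here is a hedged sketch (not the authors' analysis code) of band-pass filtering a toy averaged ABR narrowly versus widely and comparing a late-window amplitude between standard and deviant responses; the waveforms, sampling rate, and cutoffs are assumptions for illustration.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.012, 1 / fs)                # 12-ms ABR window
# Toy averages: fast early waves plus a late slow component that is smaller for
# the repetitive ("standard") tone than for the rare ("deviant") tone.
fast = 0.5 * np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 0.002)
slow = np.exp(-((t - 0.007) ** 2) / (0.001 ** 2))
abr_standard = fast + 0.3 * slow
abr_deviant = fast + 0.6 * slow

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

for label, lo, hi in [("narrow (300-3000 Hz)", 300, 3000),
                      ("wide (30-3000 Hz)", 30, 3000)]:
    std_f = bandpass(abr_standard, lo, hi, fs)
    dev_f = bandpass(abr_deviant, lo, hi, fs)
    late = (t > 0.005) & (t < 0.009)           # window around the slow P0-like wave
    diff = dev_f[late].max() - std_f[late].max()
    print(label, "deviant-minus-standard late amplitude:", round(diff, 3))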

Graphical abstract




from #Audiology via ola Kala on Inoreader https://ift.tt/2wuyamA
via IFTTT

Brain-like Emergent Auditory Learning: A Developmental Method

Publication date: Available online 31 August 2018

Source: Hearing Research

Author(s): Dongshu Wang, Hui Shan, Jianbin Xin

Abstract

Compared with machine audition, the human auditory system can recognize speech accurately and quickly. This paper proposes a new developmental network (DN) that simulates the human auditory system and constructs an artificial auditory model for speech recognition. The new model simulates each key element of the human auditory pathway as a deep network; in particular, an additional layer is included to simulate the function of the superior colliculus in the thalamus for speech context integration. The mel-frequency cepstral coefficient (MFCC) is used to extract the features of the speech signal as the sensory input of the DN. The emergent feature of the DN model provides an explanation of how internal neurons represent short speech context when they are not supervised by the external world. The experimental results show that the recognition rates for English words and phrases can be improved significantly compared to those reported in the existing literature. The proposed DN model provides a new way to address difficult problems, such as universal speech recognition, in traditional machine audition systems. Meanwhile, the same learning principle can potentially be used in, or adapted to, other computational contexts and applications.
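As a brief, hedged illustration of the front end described above (not the authors' implementation; the synthetic signal and frame settings are assumptions), MFCC features can be extracted as follows.

import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
# Synthetic stand-in for a speech snippet: a frequency glide plus a little noise
y = (0.5 * np.sin(2 * np.pi * (200 + 300 * t) * t)
     + 0.05 * np.random.default_rng(0).standard_normal(sr)).astype(np.float32)

# 13 MFCCs per 25-ms frame with a 10-ms hop; such frame vectors would serve as
# the sensory input to a network like the DN described in the abstract.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)
print("MFCC matrix shape (coefficients x frames):", mfcc.shape)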



from #Audiology via ola Kala on Inoreader https://ift.tt/2NAYNwK
via IFTTT

Development and validation of a method to record electrophysiological responses in direct acoustic cochlear implant subjects

Publication date: Available online 28 August 2018

Source: Hearing Research

Author(s): Hanne Deprez, Robin Gransier, Michael Hofmann, Jan Wouters, Nicolas Verhaert

Abstract

Acoustic hearing implants, such as direct acoustic cochlear implants (DACIs), can be used to treat profound mixed hearing loss. Electrophysiological responses in DACI subjects are of interest to confirm auditory processing intra-operatively and to assist DACI fitting post-operatively. We present two related studies, focusing on DACI artifacts and on electrophysiological measurements in DACI subjects, respectively. In the first study, we aimed to characterize DACI artifacts in order to study the feasibility of measuring frequency-specific electrophysiological responses in DACI subjects. Measurements of DACI artifacts were collected in a cadaveric head to disentangle possible DACI artifact sources and were compared to a constructed DACI artifact template. We show that, for moderate stimulation levels, DACI artifacts are dominated by the artifact from the radio frequency (RF) communication signal, which can be modeled if the RF encoding protocol is known. In the second study, we investigated the feasibility of measuring intra-operative responses in DACI subjects without applying the RF artifact models. Auditory steady-state and brainstem responses were measured intra-operatively in three DACI subjects, immediately after implantation, to confirm proper DACI functioning and coupling to the inner ear. Intra-operative responses could be measured in two of the three tested subjects. The absence of intra-operative responses in the third subject can possibly be explained by the hearing loss, attenuation of intra-operative responses, the difference between electrophysiological and behavioral thresholds, and a temporary threshold shift due to the DACI surgery. In conclusion, RF artifacts can be modeled, so that electrophysiological responses to frequency-specific stimuli could possibly be measured in DACI subjects, and intra-operative responses in DACI subjects can be obtained.
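The artifact-template idea can be illustrated generically. The sketch below is an assumption about the approach, not the method reported by the authors: it removes a known artifact waveform from a recorded epoch by least-squares scaling and returns the residual as the response estimate; all signals are synthetic.

import numpy as np

def subtract_template(recording, template):
    """Fit recording ~ a * template + b and return the residual."""
    X = np.column_stack([template, np.ones_like(template)])
    coef, *_ = np.linalg.lstsq(X, recording, rcond=None)
    return recording - X @ coef

rng = np.random.default_rng(0)
n = 2000
template = np.sin(2 * np.pi * np.arange(n) / 40)          # stand-in artifact shape
response = 0.2 * np.sin(2 * np.pi * np.arange(n) / 500)   # stand-in neural response
recording = response + 3.0 * template + 0.05 * rng.standard_normal(n)

cleaned = subtract_template(recording, template)
print("residual correlation with template:",
      round(float(np.corrcoef(cleaned, template)[0, 1]), 3))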



from #Audiology via ola Kala on Inoreader https://ift.tt/2BTlmv4
via IFTTT

Improved directional hearing of children with congenital unilateral conductive hearing loss implanted with an active bone-conduction implant or an active middle ear implant

Publication date: Available online 26 August 2018

Source: Hearing Research

Author(s): K. Vogt, H. Frenzel, S.A. Ausili, D. Hollfelder, B. Wollenberg, A.F.M. Snik, M.J.H. Agterberg

Abstract

Different amplification options, for example bone-conduction devices (BCDs) and middle ear implants, are available for listeners with congenital unilateral conductive hearing loss (UCHL). The present study investigated whether intervention with an active BCD, the Bonebridge, or a middle ear implant, the Vibrant Soundbridge (VSB), affected the sound-localization performance of listeners with congenital UCHL. Listening with a Bonebridge or VSB might provide access to binaural cues. However, when fitted with the Bonebridge, but not with a VSB, cross stimulation of the contralateral normal-hearing ear might affect binaural processing and interfere with the use of binaural cues. In the present study, twenty-three listeners with congenital UCHL were included. To assess processing of binaural cues, we investigated localization of broadband (BB, 0.5–20 kHz) filtered noise presented at varying sound levels. Sound localization abilities were analyzed separately for stimuli presented at the side of the normal-hearing ear and for stimuli presented at the side of the hearing-impaired ear. Twenty-six normal-hearing children and young adults were tested as control listeners. Sound localization abilities were measured under open-loop conditions by recording head-movement responses. We demonstrate improved sound localization abilities of children with congenital UCHL when listening with a Bonebridge or VSB, predominantly for stimuli presented at the impaired (aided) side. Our results suggest that the improvement is not related to accurate processing of binaural cues. When listening with the Bonebridge, despite cross stimulation of the contralateral cochlea, localization performance did not deteriorate compared to listening with a VSB.
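Head-movement localization data of this kind are commonly summarized with a stimulus-response regression. The lines below are a minimal sketch under that assumption (the response values are invented, and this is not the authors' analysis): the slope ("gain") approaches 1 and the mean absolute error shrinks as localization becomes accurate.

import numpy as np

stimulus_az = np.array([-75, -55, -35, -15, 15, 35, 55, 75], dtype=float)   # degrees
response_az = np.array([-60, -48, -30, -10, 12, 30, 50, 66], dtype=float)   # hypothetical

gain, bias = np.polyfit(stimulus_az, response_az, 1)    # response ~ gain*stimulus + bias
mae = np.mean(np.abs(response_az - stimulus_az))
print(f"localization gain = {gain:.2f}, bias = {bias:.1f} deg, MAE = {mae:.1f} deg")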



from #Audiology via ola Kala on Inoreader https://ift.tt/2PI6WRu
via IFTTT

Quantitative distribution of choline acetyltransferase activity in rat trapezoid body

Publication date: Available online 25 August 2018

Source: Hearing Research

Author(s): Lauren A. Linker, Lissette Carlson, Donald A. Godfrey, Judy A. Parli, C. David Ross

Abstract

There is evidence that acetylcholine functions in the cochlear nucleus, primarily through a feedback, modulatory effect on auditory processing. Using a microdissection and quantitative microassay approach, choline acetyltransferase activity was mapped in the trapezoid bodies of rats, in which the activity is relatively higher than in cats or hamsters. Maps of series of sections through the trapezoid body demonstrated generally higher choline acetyltransferase activity rostrally than caudally, particularly in its portion ventral to the medial part of the spinal trigeminal tract. In the lateral part of the trapezoid body, near the cochlear nucleus, activities tended to be higher in more superficial portions than in deeper portions. Calculation of choline acetyltransferase activity in the total trapezoid body cross-section of a rat with a comprehensive trapezoid body map gave a value 3–4 times that estimated for the centrifugal labyrinthine bundle, which is mostly composed of the olivocochlear bundle, in the same rat. Comparisons with other rats suggest that the ratio may not usually be this high, but it is still consistent with our previous results suggesting that the centrifugal cholinergic innervation of the rat cochlear nucleus reaching it via a trapezoid body route is much greater than that reaching it via branches from the olivocochlear bundle. The higher choline acetyltransferase activity rostrally than caudally in the trapezoid body is consistent with evidence that the centrifugal cholinergic innervation of the cochlear nucleus derives predominantly from locations at or rostral to its anterior part, in the superior olivary complex and pontomesencephalic tegmentum.



from #Audiology via ola Kala on Inoreader https://ift.tt/2LqN164
via IFTTT

Musical and vocal emotion perception for cochlear implants users

Publication date: Available online 25 August 2018

Source: Hearing Research

Author(s): S. Paquette, G.D. Ahmed, M.V. Goffi-Gomez, A.C.H. Hoshino, I. Peretz, A. Lehmann

Abstract

Cochlear implants can successfully restore hearing in profoundly deaf individuals and enable speech comprehension. However, the acoustic signal provided is severely degraded and, as a result, many important acoustic cues for perceiving emotion in voices and music are unavailable. The deficit of cochlear implant users in auditory emotion processing has been clearly established. Yet the extent of this deficit, and the specific cues that remain available to cochlear implant users, are unknown owing to several confounding factors.

Here we assessed the recognition of the most basic forms of auditory emotion and aimed to identify which acoustic cues are most relevant for recognizing emotions through cochlear implants. To do so, we used stimuli that allowed vocal and musical auditory emotions to be comparatively assessed while controlling for confounding factors. These stimuli were used to evaluate emotion perception in cochlear implant users (Experiment 1) and to investigate emotion perception in natural versus cochlear implant hearing in the same participants, using a validated cochlear implant simulation approach (Experiment 2).

Our results showed that vocal and musical fear was not accurately recognized by cochlear implant users. Interestingly, both experiments found that timbral acoustic cues (energy and roughness) correlated with participant ratings for both vocal and musical emotion bursts in the cochlear implant simulation condition. This suggests that specific attention should be given to these cues, especially energy and roughness, in the design of cochlear implant processors and rehabilitation protocols. For instance, music-based interventions focused on timbre could improve emotion perception and regulation, and thus social functioning, in children with cochlear implants during development.
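The cue-rating relationship reported above can be quantified with a simple correlation. The example below is a hedged sketch with invented numbers (not the study's data or code), relating a per-burst RMS-energy value to mean fear ratings.

import numpy as np
from scipy.stats import pearsonr, spearmanr

rms_energy = np.array([0.12, 0.30, 0.22, 0.41, 0.18, 0.35, 0.27, 0.09])   # per burst
fear_rating = np.array([1.5, 3.1, 2.4, 4.0, 2.0, 3.6, 2.9, 1.2])          # mean ratings

r, p = pearsonr(rms_energy, fear_rating)
rho, p_s = spearmanr(rms_energy, fear_rating)
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_s:.3f})")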



from #Audiology via ola Kala on Inoreader https://ift.tt/2w9HnAq
via IFTTT

Recovery of auditory-nerve-fiber spike amplitude under natural excitation conditions

Publication date: Available online 25 August 2018

Source: Hearing Research

Author(s): Adam J. Peterson, Antoine Huet, Jérôme Bourien, Jean-Luc Puel, Peter Heil

Abstract

Knowledge of the refractory properties of auditory-nerve fibers (ANFs) is required for understanding the transduction of the graded membrane potential of the receptor cells into spike trains. The refractory properties inferred when ANFs are excited by electrical stimulation might differ from those present when ANFs are excited naturally by transmitter release from receptor cells. As a proxy for the latter, we investigated the recovery of spike amplitude with time since the previous spike in long extracellular recordings of the activity of individual ANFs from anesthetized Mongolian gerbils. Voltage traces were filtered minimally to avoid distortions of spike amplitude and timing. The amplitude of each spike was defined as the difference between its peak voltage and an extrapolated instantaneous reference voltage at the time of the peak. Spike amplitude was normalized to that of the previous spike to exclude effects of long-term changes in recording conditions. To ensure that the amplitude of the first spike in each pair was fully recovered, each spike pair was used only when preceded by an interspike interval of at least 5 ms. We find that the recovery of spike amplitude is well described by a short dead time followed by a double-exponential recovery function. Total recovery times were short (median: 0.85 ms; interquartile range: 0.74–1.00 ms) and independent of the ANF's characteristic frequency and spontaneous rate, but they increased weakly with increasing mean rate. We emphasize the differences between the recovery of spike amplitude, the recovery of spike probability from postsynaptic refractoriness, and the recovery of spike probability as reflected in the hazard-rate function. Our findings are inconsistent with the long refractory periods assumed in some models, but are consistent with the brief refractoriness assumed in the synapse model of Peterson and Heil (2018), which reproduces the stochastic properties of stationary spontaneous and sound-driven ANF spike trains.
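The recovery description above (a dead time followed by a double-exponential approach to full amplitude) can be written down and fitted directly. The sketch below assumes one plausible parameterization and synthetic data; it is not the authors' code, and the parameter values are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def recovery(t, t_dead, w, tau1, tau2):
    """Relative spike amplitude: 0 during the dead time, then a weighted sum of two
    exponentials rising toward 1 (full recovery)."""
    t = np.asarray(t, dtype=float)
    rec = 1 - w * np.exp(-(t - t_dead) / tau1) - (1 - w) * np.exp(-(t - t_dead) / tau2)
    return np.where(t < t_dead, 0.0, rec)

rng = np.random.default_rng(2)
isi_ms = np.linspace(0.4, 5.0, 60)                       # preceding interspike interval
data = recovery(isi_ms, 0.5, 0.6, 0.15, 0.8) + 0.02 * rng.standard_normal(isi_ms.size)

popt, _ = curve_fit(recovery, isi_ms, data, p0=[0.5, 0.5, 0.2, 1.0],
                    bounds=([0.2, 0.0, 0.01, 0.01], [1.0, 1.0, 2.0, 5.0]))
t_dead, w, tau1, tau2 = popt
print(f"fitted dead time ~ {t_dead:.2f} ms; time constants ~ {tau1:.2f} and {tau2:.2f} ms")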

Graphical abstract




from #Audiology via ola Kala on Inoreader https://ift.tt/2LrwTBl
via IFTTT

Dynamic response to sound and vibration of the guinea pig utricular macula, measured in vivo using Laser Doppler Vibrometry

Publication date: Available online 24 August 2018

Source: Hearing Research

Author(s): Christopher John Pastras, Ian S. Curthoys, Daniel John Brown

Abstract

Using a commercially available Laser Doppler Vibrometer (LDV), we measured the velocity of the surgically exposed utricular macula in the dorsoventral plane, in anaesthetized guinea pigs, during Air Conducted Sound (ACS) or Bone Conducted Vibration (BCV) stimulation. We also performed simultaneous measurements of otolithic function in the form of the Utricular Microphonic (UM) and the Vestibular short-latency Evoked Potential (VsEP). Based on the level of macular vibration measured with the LDV, the UM was most sensitive to ACS and BCV between 100 and 200 Hz. The phase of the UM relative to the phase of the macular motion was relatively consistent across frequency for ACS stimulation, but varied by several cycles for BCV stimulation, suggesting a different macromechanical mode of utricular receptor activation. Moreover, unlike ACS, BCV evoked substantially distorted UM and macular vibration responses at certain frequencies, most likely due to complex resonances of the skull. Analogous to LDV studies of organ of Corti vibration, this method provides the means to study the dynamic response of the utricular macula whilst simultaneously measuring its function.
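Comparing the phase of two simultaneously recorded signals at the stimulus frequency is the core of the ACS/BCV phase observation above. Here is a hedged sketch of one way to do it (synthetic traces, assumed sampling rate; not the authors' analysis).

import numpy as np

fs = 50000.0                       # sampling rate (Hz), assumed
f_stim = 150.0                     # stimulus frequency (Hz), within the 100-200 Hz range
t = np.arange(0, 0.5, 1 / fs)
um = np.sin(2 * np.pi * f_stim * t + 0.4)         # toy Utricular Microphonic trace
macula = np.sin(2 * np.pi * f_stim * t - 0.9)     # toy LDV macular velocity trace

def phase_at(sig, fs, f0):
    spec = np.fft.rfft(sig * np.hanning(len(sig)))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return np.angle(spec[np.argmin(np.abs(freqs - f0))])

dphi = phase_at(um, fs, f_stim) - phase_at(macula, fs, f_stim)
print("UM phase re. macular motion:", round(dphi / (2 * np.pi), 3), "cycles")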



from #Audiology via ola Kala on Inoreader https://ift.tt/2o8vTcg
via IFTTT

A review of the effects of unilateral hearing loss on spatial hearing

Publication date: Available online 11 August 2018

Source: Hearing Research

Author(s): Daniel P. Kumpik, Andrew J. King

Abstract

The capacity of the auditory system to extract spatial information relies principally on the detection and interpretation of binaural cues, i.e., differences in the time of arrival or level of the sound between the two ears. In this review, we consider the effects of unilateral or asymmetric hearing loss on spatial hearing, with a focus on the adaptive changes in the brain that may help to compensate for an imbalance in input between the ears. Unilateral hearing loss during development weakens the brain's representation of the deprived ear, and this may outlast the restoration of function in that ear and therefore impair performance on tasks such as sound localization and spatial release from masking that rely on binaural processing. However, loss of hearing in one ear also triggers a reweighting of the cues used for sound localization, resulting in increased dependence on the spectral cues provided by the other ear for localization in azimuth, as well as adjustments in binaural sensitivity that help to offset the imbalance in inputs between the two ears. These adaptive strategies enable the developing auditory system to compensate to a large degree for asymmetric hearing loss, thereby maintaining accurate sound localization. They can also be leveraged by training following hearing loss in adulthood. Although further research is needed to determine whether this plasticity can generalize to more realistic listening conditions and to other tasks, such as spatial unmasking, the capacity of the auditory system to undergo these adaptive changes has important implications for rehabilitation strategies in the hearing impaired.
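For readers unfamiliar with the two binaural cues named in the opening sentence, the small sketch below (illustrative only, with synthetic signals; not from the review) estimates an interaural time difference by cross-correlation and an interaural level difference from the RMS ratio.

import numpy as np

fs = 44100
rng = np.random.default_rng(3)
left = rng.standard_normal(4410)                 # 100 ms of noise at the left ear
right = 0.6 * np.roll(left, 20)                  # delayed (~0.45 ms) and attenuated copy

xcorr = np.correlate(right, left, mode="full")   # peak lag = samples by which right lags
lag = np.argmax(xcorr) - (len(left) - 1)
itd_ms = 1000 * lag / fs                         # positive: left ear leads
ild_db = 20 * np.log10(np.sqrt(np.mean(left ** 2)) / np.sqrt(np.mean(right ** 2)))
print(f"ITD ~ {itd_ms:.2f} ms, ILD ~ {ild_db:.1f} dB")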



from #Audiology via ola Kala on Inoreader https://ift.tt/2MlukVW
via IFTTT

Psychophysiological measurement of affective responses during speech perception

Publication date: Available online 10 August 2018

Source: Hearing Research

Author(s): Alexander L. Francis, Jordan Oliver

Abstract

When people make decisions about listening, such as whether to continue attending to a particular conversation or whether to wear their hearing aids to a particular restaurant, they do so on the basis of more than just their estimated performance. Recent research has highlighted the vital role of more subjective qualities such as effort, motivation, and fatigue. Here, we argue that the importance of these factors is largely mediated by a listener's emotional response to the listening challenge, and suggest that emotional responses to communication challenges may provide a crucial link between day-to-day communication stress and long-term health. We start by introducing some basic concepts from the study of emotion and affect. We then develop a conceptual framework to guide future research on this topic through examination of a variety of autonomic and peripheral physiological responses that have been employed to investigate both cognitive and affective phenomena related to challenging communication. We conclude by suggesting the need for further investigation of the links between communication difficulties, emotional response, and long-term health, and make some recommendations intended to guide future research on affective psychophysiology in speech communication.



from #Audiology via ola Kala on Inoreader https://ift.tt/2npuNZ7
via IFTTT

Prolonged low-level noise exposure reduces rat distortion product otoacoustic emissions above a critical level

Publication date: Available online 8 August 2018

Source: Hearing Research

Author(s): Deng-Ling Zhao, Adam Sheppard, Massimo Ralli, Xiaopeng Liu, Richard Salvi

Abstract

Prolonged noise exposures presented at low to moderate intensities are often used to investigate neuroplastic changes in the central auditory pathway. A common assumption in many studies is that central auditory changes occur independently of any hearing loss or cochlear dysfunction. Since hearing loss from a long-term noise exposure can only occur if the level of the noise exceeds a critical level, prolonged noise exposures that incrementally increase in intensity can be used to determine the critical level for any given species and noise spectrum. Here we used distortion product otoacoustic emissions (DPOAEs) to determine the critical level in male inbred Sprague-Dawley rats exposed to a 16–20 kHz noise that increased from 45 to 92 dB SPL in 8 dB increments. DPOAE amplitudes were largely unaffected by noise presented at 60 dB SPL and below. However, DPOAEs within and above the frequency band of the exposures declined rapidly at noise intensities of 68 dB SPL and above. The largest and most rapid decline in DPOAE amplitude occurred at 30 kHz, nearly an octave above the 16–20 kHz exposure band. The rate of decline in DPOAE amplitude was 0.54 dB for every 1 dB increase in noise intensity. Using a linear regression calculation, the estimated critical level for 16–20 kHz noise was remarkably low, approximately 60 dB SPL. These results indicate that long-duration, 16–20 kHz noise exposures in the 65–70 dB SPL range likely affect the cochlea and central auditory system of male Sprague-Dawley rats.
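The regression step mentioned above can be illustrated with made-up numbers (these are not the study's data): fit the declining portion of DPOAE amplitude change against exposure level and solve for the level at which the fitted change is zero, which serves as the critical-level estimate.

import numpy as np

exposure_db = np.array([68, 76, 84, 92], dtype=float)    # levels in the declining range
dpoae_change_db = np.array([-4.0, -8.5, -13.0, -17.5])   # amplitude change re. baseline

slope, intercept = np.polyfit(exposure_db, dpoae_change_db, 1)
critical_level = -intercept / slope                      # level where the fit crosses zero
print(f"decline slope = {slope:.2f} dB/dB, estimated critical level = "
      f"{critical_level:.1f} dB SPL")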



from #Audiology via ola Kala on Inoreader https://ift.tt/2M2acJb
via IFTTT

No otoacoustic evidence for a peripheral basis of absolute pitch

Publication date: Available online 8 August 2018

Source: Hearing Research

Author(s): Larissa McKetton, David Purcell, Victoria Stone, Jessica Grahn, Christopher Bergevin

Abstract

Absolute pitch (AP) is the ability to identify the perceived pitch of a sound without an external reference. AP is relatively rare, with an incidence of approximately 1 in 10,000, and the mechanisms underlying it are not well understood. This study examined otoacoustic emissions (OAEs) to determine whether there is evidence of a peripheral (i.e., cochlear) basis for AP. Two OAE types were examined: spontaneous emissions (SOAEs) and stimulus-frequency emissions (SFOAEs). Our motivations for exploring a peripheral foundation for AP were several-fold. First is the observation that pitch judgment accuracy has been reported to decrease with age due to age-dependent physiological changes in cochlear biomechanics. Second is the notion that SOAEs, which are indirectly related to perception, could act as a fixed frequency reference. Third, SFOAE delays, which have been demonstrated to serve as a proxy measure for cochlear frequency selectivity, could indicate tuning differences between groups. These considerations led us to the hypotheses that AP subjects would, relative to controls, exhibit (a) greater SOAE activity and (b) sharper cochlear tuning. To test these notions, measurements were made in normal-hearing control (N = 33) and AP-possessor (N = 22) populations. In short, no substantial difference in SOAE activity was found between groups, indicating no evidence for one or more strong SOAEs that could act as a fixed cue. SFOAE phase-gradient delays, measured at several different probe levels (20–50 dB SPL), also showed no significant differences between groups. This observation argues against sharper cochlear frequency selectivity in AP subjects. Taken together, these data support the prevailing view that AP mechanisms predominantly arise at a processing level in the central nervous system (CNS), at the brainstem or higher, not within the cochlea.
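A phase-gradient (group) delay of the kind measured for SFOAEs is simply the negative slope of unwrapped emission phase with respect to frequency, divided by 2*pi. The sketch below is illustrative only (synthetic phase data assuming a 10-ms delay; not the study's code).

import numpy as np

freqs_hz = np.linspace(1000, 2000, 21)                   # probe frequencies
true_delay_s = 0.010
phase_rad = -2 * np.pi * freqs_hz * true_delay_s         # linear phase of a pure delay
phase_rad += 0.05 * np.random.default_rng(4).standard_normal(freqs_hz.size)

unwrapped = np.unwrap(phase_rad)
tau_s = -np.gradient(unwrapped, freqs_hz) / (2 * np.pi)  # tau = -(1/2pi) * dphi/df
print(f"median phase-gradient delay ~ {1000 * np.median(tau_s):.2f} ms")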



from #Audiology via ola Kala on Inoreader https://ift.tt/2OinJZV
via IFTTT

Changes in postural sway and gait characteristics as a consequence of anterior load carriage

Publication date: Available online 31 August 2018

Source: Gait & Posture

Author(s): Matthew Roberts, Christopher Talbot, Anthony Kay, Michael Price, Mathew Hill



from #Audiology via ola Kala on Inoreader https://ift.tt/2PphzaQ
via IFTTT

Gait Analysis of Patients with Continuous Proximal Sciatic Nerve Blockade in Flexion Contractures after Primary Total Knee Arthroplasty

Publication date: Available online 31 August 2018

Source: Gait & Posture

Author(s): Meng Zhou, Shuai An, Mingli Feng, Zheng Li, Huiliang Shen, Kuan Zhang, Jun Sun, Guanglei Cao

Abstract
Background

The main objective of total knee arthroplasty is to relieve pain, restore normal knee function, and improve gait stability. Significant flexion contractures can severely impair function after surgery. The purpose of this study is to evaluate the efficacy of implementing a continuous proximal sciatic nerve block in conjunction with aggressive physical therapy to treat patients with persistent flexion contractures that were recalcitrant to rehabilitation efforts following primary total knee arthroplasty (TKA).

Methods

From December 2012 to January 2016, 20 patients (15 females and 5 males, aged 62 to 78 years; median age = 65.7 years) were enrolled in this study. All had flexion contractures ranging from 15° to 25° (19.2° ± 5.6°) that had persisted for at least 1.5 months after total knee arthroplasty and showed no significant improvement with conventional therapeutic methods. Demographic data, passive range of motion, flexion contracture, pain score during stretching, and the Hospital for Special Surgery knee score were recorded. A portable motion analyzer was used to obtain the corresponding gait parameters from the flexion contracture group and a control group. Repeated-measures ANOVA was used to compare the clinical results.

Results

In combination with 2 to 4 months (2.5 ± 1.3) of aggressive knee stretching exercises, 16 of 18 knees achieved full extension, and 2 of 18 improved to within 5° of full extension. Over a follow-up of 12 to 48 months (26.6 ± 10.7), the improved range of motion was maintained in all of these patients, with no recurrence of deformity. The mean Hospital for Special Surgery knee score improved from 61.2 to 93.2 points (p < 0.001). After six months of continuous proximal sciatic nerve blockade, all gait parameters in the flexion contracture group showed significant improvement.

Conclusion

A continuous proximal sciatic nerve block could be a useful adjunct to a physical therapy regimen for patients with knee flexion contractures after primary total knee arthroplasty, especially in difficult-to-treat cases that are recalcitrant to conservative therapy.
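
The repeated-measures comparison described in the Methods above could be sketched as follows, using statsmodels' AnovaRM on fabricated pre- and post-treatment Hospital for Special Surgery scores (all values invented; the study's actual analysis also compared gait parameters against a control group):

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    n_patients = 20
    pre = rng.normal(61.2, 5.0, n_patients)    # hypothetical pre-treatment HSS scores
    post = rng.normal(93.2, 4.0, n_patients)   # hypothetical post-treatment HSS scores

    df = pd.DataFrame({
        "subject": np.tile(np.arange(n_patients), 2),
        "time": np.repeat(["pre", "post"], n_patients),
        "hss_score": np.concatenate([pre, post]),
    })

    # One within-subject factor (time); additional gait conditions would extend `within`
    result = AnovaRM(data=df, depvar="hss_score", subject="subject", within=["time"]).fit()
    print(result)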



from #Audiology via ola Kala on Inoreader https://ift.tt/2N9uqAv
via IFTTT

Stimulus-specific adaptation in the anesthetized mouse revealed by brainstem auditory evoked potentials

Publication date: Available online 31 August 2018

Source: Hearing Research

Author(s): Daniel Duque, Rui Pais, Manuel S. Malmierca

Abstract

Neural responses to sensory inputs in a complex, natural environment must be weighted according to their relevance. To do so, the brain needs to be able to deal with sudden stimulus fluctuations in an ever-changing acoustic environment. Stimulus-specific adaptation (SSA) is a phenomenon in which some neurons along the auditory pathway show a reduced response to repetitive sounds while remaining responsive to sounds that occur rarely. SSA has been demonstrated from the inferior colliculus to the auditory cortex, but has not been detected in the cochlear nucleus. To determine where SSA is first generated along the auditory pathway, auditory brainstem responses (ABRs) to pure tones were recorded in anesthetized mice using an oddball paradigm. With a typical narrow band-pass filter, changes in the ABRs suggest that unspecific short-term adaptation may occur as early as the auditory nerve fibers. Furthermore, after applying a wide band-pass filter, which allows visualization of a late slow wave in the ABR, we found a reduced response amplitude to repetitive sounds, compared to rare ones, in the slow wave component P0 that follows the fast wave V. Previous studies have shown that P0 is temporally correlated with the sustained responses of the inferior colliculus; we therefore suggest that this nucleus is the first along the auditory pathway to show stimulus-specific adaptation.
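
The filtering manipulation described above can be reproduced with a standard zero-phase Butterworth band-pass. The sketch below uses a fabricated ABR-like waveform and assumed cutoff frequencies (the passbands used in the study may differ), only to illustrate why a wide filter retains the late slow wave that a typical narrow ABR filter removes:

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 20000                                  # sampling rate in Hz (assumed)
    t = np.arange(0, 0.015, 1 / fs)             # 15 ms epoch

    # Fabricated ABR-like response: a fast wave near 4 ms plus a late slow wave near 9 ms
    abr = (np.exp(-((t - 0.004) / 0.0005) ** 2) * np.sin(2 * np.pi * 1000 * t)
           + 0.3 * np.exp(-((t - 0.009) / 0.002) ** 2))

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)                # zero-phase filtering

    narrow = bandpass(abr, 300, 3000, fs)       # typical narrow ABR filter: slow wave attenuated
    wide = bandpass(abr, 10, 3000, fs)          # wider filter: late slow (P0-like) wave retained

    print("late-window energy, narrow filter:", np.sum(narrow[t > 0.007] ** 2))
    print("late-window energy, wide filter:  ", np.sum(wide[t > 0.007] ** 2))

Comparing oddball (rare) and standard (repetitive) averages after each filtering stage is then what reveals whether the slow P0 component adapts while the faster waves do not.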




from #Audiology via ola Kala on Inoreader https://ift.tt/2wuyamA
via IFTTT

Residual Cochlear Function in Adults and Children Receiving Cochlear Implants: Correlations With Speech Perception Outcomes

Objectives: Variability in speech perception outcomes with cochlear implants remains largely unexplained. Recently, electrocochleography, or measurement of cochlear potentials in response to sound, has been used to assess residual cochlear function at the time of implantation. Our objective was to characterize the potentials recorded preimplantation in subjects of all ages and to evaluate the relationship between the responses, including a subjective estimate of neural activity, and speech perception outcomes. Design: Electrocochleography was recorded in a prospective cohort of 284 cochlear implant candidates at the University of North Carolina (10 months to 88 years of age). A measure of residual cochlear function called the “total response” (TR), the sum of the magnitudes of spectral components in response to tones of different stimulus frequencies, was obtained for each subject. The TR was then related to results on age-appropriate monosyllabic word score tests presented in quiet. In addition to the TR, the electrocochleography results were also assessed for neural activity in the form of the compound action potential and the auditory nerve neurophonic. Results: The TR magnitude ranged from a barely detectable response of about 0.02 µV to more than 100 µV. In adults (18 to 79 years old), the TR accounted for 46% of the variability in speech perception outcome by linear regression (r² = 0.46; p
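
The “total response” described above is a sum of spectral magnitudes across stimulus frequencies. A minimal sketch of such a computation on fabricated, averaged ECochG waveforms follows; the stimulus frequencies, epoch length, noise level, and helper function are all assumptions for illustration, not the authors' protocol:

    import numpy as np

    fs = 20000                                       # sampling rate in Hz (assumed)
    t = np.arange(0, 0.04, 1 / fs)                   # 40 ms averaged response window
    stim_freqs = [250, 500, 750, 1000, 2000, 4000]   # Hz, assumed tone frequencies

    def component_magnitudes(response, f0, fs, n_components=3):
        """Spectral magnitudes at f0 and its first few harmonics (those below Nyquist)."""
        spec = 2 * np.abs(np.fft.rfft(response)) / len(response)
        freqs = np.fft.rfftfreq(len(response), 1 / fs)
        harmonics = [f0 * k for k in range(1, n_components + 1) if f0 * k < fs / 2]
        bins = [int(np.argmin(np.abs(freqs - f))) for f in harmonics]
        return spec[bins]

    rng = np.random.default_rng(1)
    tr = 0.0
    for f0 in stim_freqs:
        # Fabricated averaged ECochG response to a tone at f0 (microvolts)
        response = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(len(t))
        tr += component_magnitudes(response, f0, fs).sum()

    print(f"total response (TR): {tr:.2f} uV")

Relating TR to monosyllabic word scores across subjects would then be an ordinary linear regression, e.g. np.polyfit(tr_values, word_scores, 1), with r² describing the variance explained.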

from #Audiology via ola Kala on Inoreader https://ift.tt/2N51ycC
via IFTTT

The World’s First ‘Healthable’ Hearing Aid

Starkey (https://www.starkey.com/) has introduced Livio AI, a "Healthable" hearing aid that not only tracks the user's physical activity and cognitive health but also features the company's latest sound technology. The 3D motion sensors inside Livio AI allow the hearing aids to detect movement, track activities, and recognize gestures. The hearing aids then communicate with each other and with mobile accessories to deliver real-time feedback on the user's overall body and cognitive health and fitness as scores in the companion Thrive Hearing app. Livio AI also includes the new Hearing Reality technology, which provides an average 50 percent reduction in noisy environments, significantly reduced listening effort, and newly enhanced clarity of speech, while artificial intelligence and integrated sensors enable it to optimize the hearing experience. Livio AI is available as a RIC 312 and a BTE 13 and is the first hearing aid to feature Amazon Alexa connectivity. It is currently available in the United States and Canada, with a global rollout to more than 20 countries planned for 2019.

Published: 8/31/2018 8:09:00 AM


from #Audiology via ola Kala on Inoreader https://ift.tt/2wuoSHq
via IFTTT
