Thursday, 6 October 2016

Health-Related Quality of Life in Korean Adults with Hearing Impairment: The Korea National Health and Nutrition Examination Survey 2010 to 2012

by Min Kwan Baek, Young Saing Kim, Eun Young Kim, Ae Jin Kim, Won-Jun Choi

Background

As the global population ages, the prevalence of disabling hearing impairment (HI) has increased rapidly. Understanding the impact of HI on health-related quality of life (HRQoL) is important for developing strategic plans and guiding therapeutic interventions.

Purpose

To evaluate HRQoL in Korean adults with different degrees of HI using the EuroQol five-dimension (EQ-5D) questionnaire and the EQ visual analogue scale (EQ-VAS), preference-based generic measures of HRQoL.

Methods

Using a representative dataset from the Korea National Health and Nutrition Examination Survey (KNHANES) from January 2010 to December 2012, EQ-5D questionnaire and EQ-VAS scores of subjects with HI were compared with those of subjects without HI. Logistic regression analysis, with adjustment for covariates, was used to evaluate the impact of HI on HRQoL scales. HI was defined according to the hearing thresholds of pure-tone averages at 0.5, 1, 2, and 3 kHz in the better hearing ear as follows: mild HI (26 to …).

Results

Of the 16,449 Korean adults in KNHANES (age, 45.0 ± 0.2 years; male, 49.7%), 1,757 (weighted prevalence, 7.6%) had mild HI and 890 (3.6%) had moderate to severe HI. Subjects with HI had impaired HRQoL compared with subjects without HI (EQ-5D, 0.96 ± 0.00 vs. 0.88 ± 0.00 vs. 0.86 ± 0.01 for control vs. mild HI vs. moderate to severe HI, p < 0.001). After adjustment, the EQ-VAS difference remained significant (p = 0.004), but the EQ-5D impairment disappeared (0.86 ± 0.02 vs. 0.88 ± 0.01 for moderate to severe HI vs. control, p = 0.058).
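The pure-tone-average grading described in the Methods can be sketched as follows. Since the grade cutoffs are truncated in the source abstract, the 25 dB and 40 dB HL boundaries below are assumed, WHO-style values:

```python
def pure_tone_average(thresholds_db):
    """Mean hearing threshold (dB HL) over 0.5, 1, 2, and 3 kHz."""
    assert len(thresholds_db) == 4  # one threshold per test frequency
    return sum(thresholds_db) / len(thresholds_db)

def grade_hearing(left_pta, right_pta):
    """Grade by the better (lower-threshold) ear, as in the abstract.

    The 25/40 dB HL boundaries are assumptions, not taken from the source.
    """
    better = min(left_pta, right_pta)
    if better <= 25:
        return "no HI"
    elif better <= 40:          # assumed mild-HI upper bound
        return "mild HI"
    else:
        return "moderate to severe HI"

# Example: better ear averages 30 dB HL -> mild HI
print(grade_hearing(pure_tone_average([25, 30, 30, 35]), 55.0))
```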

Conclusion

After adjusting for socio-demographic factors, psychosocial factors, and comorbidities, Korean adults with moderate to severe HI rated their health status lower than subjects without HI.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2dvW0bt
via IFTTT

Rapid Release From Listening Effort Resulting From Semantic Context, and Effects of Spectral Degradation and Cochlear Implants

People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability.
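The pupillometric effort metric the abstract relies on is typically built from baseline-corrected dilation traces. A minimal illustration with an invented sampling rate, trial structure, and effect sizes (this is not the authors' pipeline):

```python
import numpy as np

def baseline_corrected(trace, fs=60, baseline_s=1.0):
    """Subtract the mean pupil size in the pre-stimulus baseline window."""
    n_base = int(fs * baseline_s)
    return trace - trace[:n_base].mean()

rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / 60)                  # one 5-s trial sampled at 60 Hz
# Invented traces: dilation ramps up after stimulus onset at t = 1 s
effortful = 0.3 * np.clip(t - 1, 0, None) + rng.normal(0, 0.01, t.size)
easy = 0.1 * np.clip(t - 1, 0, None) + rng.normal(0, 0.01, t.size)

# A larger post-onset dilation indicates greater listening effort
print(baseline_corrected(effortful).mean() > baseline_corrected(easy).mean())
```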



from #Audiology via ola Kala on Inoreader http://ift.tt/2dPwgXl
via IFTTT

Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids: Effects of Nonlinear Frequency Compression

In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users’ access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant–vowel–consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD–ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed.
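The interaural quantities analyzed in this study can be computed from a binaural recording in a few lines. A toy sketch with an invented delayed-and-attenuated right-ear signal, treating ILD as a left/right energy ratio in dB and ITD as the cross-correlation peak lag:

```python
import numpy as np

rng = np.random.default_rng(1)
left = rng.normal(size=800)        # 50 ms of broadband noise at 16 kHz
right = 0.5 * np.roll(left, 8)     # attenuated, 8-sample (~0.5 ms) lag

def ild_db(l, r):
    """Interaural level difference as a left/right energy ratio in dB."""
    return 10 * np.log10(np.sum(l ** 2) / np.sum(r ** 2))

def itd_samples(l, r, max_lag=40):
    """Interaural time difference as the cross-correlation peak lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.sum(l * np.roll(r, -k)) for k in lags]
    return int(lags[np.argmax(xcorr)])

print(round(ild_db(left, right), 1), itd_samples(left, right))  # prints: 6.0 8
```

The same two functions applied inside a running temporal window give the time-varying ITD/ILD trajectories the study examines.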



from #Audiology via ola Kala on Inoreader http://ift.tt/2cXcdQv
via IFTTT

A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.
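The core EC-plus-energy-change idea can be sketched compactly. The version below assumes a frontal target (zero ITD/ILD, so equalization reduces to the identity) and an invented -3 dB mask criterion; it illustrates the principle, not the authors' implementation:

```python
import numpy as np

def stft(x, n=256, hop=128):
    """Windowed short-time Fourier transform (frames x frequency bins)."""
    frames = [x[i:i + n] * np.hanning(n) for i in range(0, len(x) - n, hop)]
    return np.fft.rfft(frames, axis=1)

rng = np.random.default_rng(2)
target = rng.normal(size=16000)               # frontal: identical in both ears
masker = rng.normal(size=16000)
left = target + masker
right = target + np.roll(masker, 12)          # lateral masker: delayed on the right

L, R = stft(left), stft(right)
in_energy = (np.abs(L) ** 2 + np.abs(R) ** 2) / 2
out_energy = np.abs(L - R) ** 2 / 2           # EC output: frontal target cancelled
# A large energy drop means the unit was dominated by the (cancelled) target
mask = 10 * np.log10(out_energy / (in_energy + 1e-12)) < -3.0
print(mask.shape)  # → (123, 129)
```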



from #Audiology via ola Kala on Inoreader http://ift.tt/2dPxp14
via IFTTT

Subjective Listening Effort and Electrodermal Activity in Listening Situations with Reverberation and Noise

Disturbing factors like reverberation or ambient noise can impair speech recognition and raise the listening effort needed for successful communication in daily life. Situations with high listening effort are thought to result in increased stress for the listener. The aim of this study was to explore possible measures to determine listening effort in situations with varying background noise and reverberation. For this purpose, subjective ratings of listening effort, speech recognition, and stress level, together with the electrodermal activity as a measure of the autonomic stress reaction, were investigated. It was expected that the electrodermal activity would show different stress levels in different acoustic situations and might serve as an alternative to subjective ratings. Ten young normal-hearing and 17 elderly hearing-impaired subjects listened to sentences from the Oldenburg sentence test either with stationary background noise or with reverberation. Four listening situations were generated, an easy and a hard one for each of the two disturbing factors, which were related to each other by the Speech Transmission Index. The easy situation resulted in 100% and the hard situation resulted in 30 to 80% speech recognition. The results of the subjective ratings showed significant differences between the easy and the hard listening situations in both subject groups. Two methods of analyzing the electrodermal activity values revealed similar, but nonsignificant trends. Significant correlations between subjective ratings and physiological electrodermal activity data were observed for normal-hearing subjects in the noise situation.



from #Audiology via ola Kala on Inoreader http://ift.tt/2cXbhLU
via IFTTT

The Prediction of Speech Recognition in Noise With a Semi-Implantable Bone Conduction Hearing System by External Bone Conduction Stimulation With Headband: A Prospective Study

Semi-implantable transcutaneous bone conduction devices are treatment options for conductive and mixed hearing loss (CHL/MHL). For counseling of patients, realistic simulation of the functional result is desirable. This study compared speech recognition in noise with a semi-implantable transcutaneous bone conduction device to external stimulation with a bone conduction device fixed by a headband. Eight German-speaking adult patients were enrolled after a semi-implantable transcutaneous bone conduction device (Bonebridge, Med-El) was implanted and fitted. Patients received a bone conduction device for external stimulation (Baha BP110, Cochlear) fixed by a headband for comparison. The main outcome measure was speech recognition in noise (Oldenburg Sentence Test). Pure-tone audiometry was performed and subjective benefit was assessed using the Glasgow Benefit Inventory and Abbreviated Profile of Hearing Aid Benefit questionnaires. Unaided, patients showed a mean signal-to-noise ratio threshold of 4.6 ± 4.2 dB S/N for speech recognition. The aided results were –3.3 ± 7.2 dB S/N by external bone conduction stimulation and –1.2 ± 4.0 dB S/N by the semi-implantable bone conduction device. The difference between the two devices was not statistically significant, while the difference between the unaided and aided conditions was significant for both devices. Both questionnaires for subjective benefit favored the semi-implantable device over external stimulation. We conclude that it is possible to simulate the result of speech recognition in noise with a semi-implantable transcutaneous bone conduction device by external stimulation. This should be part of preoperative counseling of patients with CHL/MHL before implantation of a bone conduction device.
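The dB S/N thresholds reported here come from an adaptive speech-in-noise procedure. A hedged sketch of such a tracking rule, with an invented listener model and step size (not the Oldenburg test's actual algorithm):

```python
import random

def track_srt(true_srt_db, n_trials=200, step_db=2.0, start_db=10.0, seed=3):
    """1-down/1-up SNR staircase run against a toy logistic listener."""
    rng = random.Random(seed)
    snr, history = start_db, []
    for _ in range(n_trials):
        # Toy psychometric function: 50% correct when snr == true_srt_db
        p_correct = 1 / (1 + 10 ** ((true_srt_db - snr) / 2))
        correct = rng.random() < p_correct
        snr += -step_db if correct else step_db  # down when correct, up when wrong
        history.append(snr)
    return sum(history[-50:]) / 50               # late-trial average as SRT estimate
```

Run with `track_srt(-3.3)`, the track oscillates around the simulated listener's threshold, so the estimate lands near -3.3 dB S/N to within the step size.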



from #Audiology via ola Kala on Inoreader http://ift.tt/2dPwMod
via IFTTT


Circadian Rhythm and Hearing

Circadian rhythms represent physical, mental, and behavioral changes that follow a roughly 24-hour cycle. The master clock that controls circadian rhythms is called the suprachiasmatic nucleus (SCN). The circadian rhythm is endogenous, but it is also adjusted, or entrained, by local environmental cues (called zeitgebers, German for "time givers"), which include light, temperature, and redox cycles.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2cWmqNd
via IFTTT