Biologists Locate Brain's Processing Point for Acoustic Signals Essential to Human Communication
2012-03-09
In both animals and humans, vocal signals used for communication contain a wide array of different sounds that are determined by the vibrational frequencies of the vocal cords. For example, the pitch of someone's voice, and how it changes as they are speaking, depends on a complex series of varying frequencies. Knowing how the brain sorts out these changing frequencies -- known as frequency-modulated (FM) sweeps -- is believed to be essential to understanding many hearing-related behaviors, like speech. Now, a pair of biologists at the California Institute of Technology (Caltech) has identified how and where the brain processes this type of sound signal.
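For a concrete picture of what an FM sweep is, the short sketch below (not taken from the study) generates a rising and a falling sweep with Python's scipy.signal.chirp; the frequencies and duration are arbitrary illustrative values.

```python
import numpy as np
from scipy.signal import chirp

fs = 44100                                       # sampling rate in Hz
dur = 0.1                                        # 100 ms sweeps
t = np.linspace(0, dur, int(dur * fs), endpoint=False)

rising = chirp(t, f0=2000, t1=t[-1], f1=8000)    # upward FM sweep, 2 -> 8 kHz
falling = chirp(t, f0=8000, t1=t[-1], f1=2000)   # downward FM sweep, 8 -> 2 kHz
```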
Their findings are outlined in a paper published in the March 8 issue of the journal Neuron.
Knowing the direction of an FM sweep -- whether it is rising or falling, for example -- and decoding its meaning is important in every language. The significance of the direction of an FM sweep is most evident in tone languages such as Mandarin Chinese, in which rising or dipping frequencies within a single syllable can change the meaning of a word.
In their paper, the researchers pinpointed the brain region in rats where the task of sorting FM sweeps begins.
"This type of processing is very important for understanding language and speech in humans," says Guangying Wu, principal investigator of the study and a Broad Senior Research Fellow in Brain Circuitry at Caltech. "There are some people who have deficits in processing this kind of changing frequency; they experience difficulty in reading and learning language, and in perceiving the emotional states of speakers. Our research might help us understand these types of disorders, and may give some clues for future therapeutic designs or designs for prostheses like hearing implants."
The researchers -- including co-author Richard I. Kuo, a research technician in Wu's laboratory at the time of the study (now a graduate student at the University of Edinburgh) -- found that the processing of FM sweeps begins in the midbrain, an area located below the cerebral cortex near the center of the brain -- a result that, Wu says, came as a surprise.
"Some people thought this type of sorting happened in a different region, for example in the auditory nerve or in the brain stem," says Wu. "Others argued that it might happen in the cortex or thalamus. "
To acquire high-quality in-vivo measurements in the midbrain, which is located deep within the brain, the team designed a novel technique using two paired -- or co-axial -- electrodes. Previously, it had been very difficult for scientists to acquire recordings in hard-to-access brain regions such as the midbrain, thalamus, and brain stem, says Wu, who believes the new method will be applicable to a wide range of deep-brain research studies.
In addition to finding the site where FM sweep selectivity begins, the researchers discovered how auditory neurons in the midbrain respond to these frequency changes. By combining their measurements with computational models, they confirmed that the recorded neurons responded selectively to FM sweeps based on their direction. For example, some neurons were more sensitive to upward sweeps, while others responded more to downward sweeps.
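One classic way to illustrate direction selectivity (a toy coincidence-detector sketch, not the authors' model) is to delay a low-frequency channel and multiply it with a high-frequency channel: a rising sweep excites the low channel first, so after the delay the two channels coincide and the response is large, while a falling sweep excites them in the wrong order. The band centers, delay, and stimulus parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, chirp, filtfilt, hilbert

fs = 44100                                    # sampling rate in Hz
dur = 0.1                                     # 100 ms sweeps
t = np.linspace(0, dur, int(dur * fs), endpoint=False)
up = chirp(t, f0=2000, t1=t[-1], f1=8000)     # rising sweep, 2 -> 8 kHz
down = chirp(t, f0=8000, t1=t[-1], f1=2000)   # falling sweep, 8 -> 2 kHz

def band_envelope(x, lo, hi):
    # Bandpass one "frequency channel" and take its amplitude envelope.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def upward_selective_response(x, delay_s=0.08):
    # Delay the low-frequency channel, then measure its coincidence with
    # the high-frequency channel; only an upward sweep lines the two up.
    low = band_envelope(x, 1500, 3000)
    high = band_envelope(x, 7000, 9000)
    shift = int(delay_s * fs)
    low_delayed = np.concatenate([np.zeros(shift), low[:-shift]])
    return float(np.sum(low_delayed * high))

print("rising sweep :", upward_selective_response(up))    # large response
print("falling sweep:", upward_selective_response(down))  # near zero
```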
"Our findings suggest that neural networks in the midbrain can convert from non-selective neurons that process all sounds to direction-selective neurons that help us give meanings to words based on how they are spoken. That's a very fundamental process," says Wu.
Wu says he plans to continue this line of research, with an eye -- or ear -- toward helping people with hearing-related disorders. "We might be able to target this area of the midbrain for treatment in the near future," he says.