English  |  Français 

Can neural oscillations help segment speech?

Principal investigators: Alexandre Hyafil, Lorenzo Fontolan, Anne-Lise Giraud, Boris Gutkin

Many biological stimuli present a quasi-rhythmic structure that unfolds simultaneously across several timescales, yet it is unclear how the brain decomposes such multiplexed sensory input. Speech is a paradigmatic example: information is carried at phonemic (20-40 Hz), syllabic (2-10 Hz), and prosodic (<2 Hz) timescales. Strikingly, neural oscillations at the corresponding timescales, notably theta (4-8 Hz) and low gamma (30-50 Hz), have been recorded in the auditory cortex during speech comprehension. Could these oscillations support the decomposition of speech into syllables and phonemes? We approach this question by building a neuronal model of auditory cortical microcircuits, showing how coupled generators of theta and gamma oscillations could indeed mediate such de-multiplexing of the speech signal. Our results so far show that theta-spiking oscillations can track the syllabic rhythm of speech and temporally organize the responses of gamma neurons into a decodable code. Nesting of gamma-spiking oscillations within the theta rhythm appears to be necessary for accurate encoding. Like human speech recognition, the resulting oscillation-based code is resilient to changes in speech rate. These modelling results pave the way to understanding how nested cortical oscillations, as observed in the human auditory cortex, provide a viable instrument for speech de-multiplexing, parsing, and encoding.
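To make the theta-gamma nesting idea concrete, the following is a minimal sketch in Python/NumPy; it is not the authors' spiking microcircuit model. A theta-band phase oscillator is weakly entrained by a stand-in "speech envelope", and a gamma-band carrier is amplitude-gated by the theta phase. The envelope, the coupling strength `k`, and all frequencies are illustrative assumptions.

```python
import numpy as np

dt = 1e-3                                  # 1 ms integration step
t = np.arange(0.0, 2.0, dt)                # 2 s of simulated time
f_theta, f_gamma = 5.0, 40.0               # syllabic and phonemic rates (Hz)
k = 8.0                                    # envelope-to-theta coupling (assumed)

# Stand-in quasi-rhythmic speech envelope at ~4.5 Hz (slightly off f_theta,
# so tracking must come from entrainment rather than frequency matching).
env = 0.5 * (1.0 + np.sin(2.0 * np.pi * 4.5 * t))

# Theta phase oscillator: dphi/dt = 2*pi*f_theta + k * env(t) * sin(-phi).
# The coupling term pulls the phase toward zero whenever the envelope is
# strong, so theta cycles lock onto the envelope's syllable-like peaks.
phi = np.zeros_like(t)
for i in range(1, len(t)):
    dphi = 2.0 * np.pi * f_theta + k * env[i - 1] * np.sin(-phi[i - 1])
    phi[i] = phi[i - 1] + dt * dphi

# Gamma activity gated by theta phase: bursts of fast oscillation appear
# only near the preferred theta phase, i.e. nested (cross-frequency) coupling.
gate = (0.5 * (1.0 + np.cos(phi))) ** 2
gamma = gate * np.sin(2.0 * np.pi * f_gamma * t)
```

In the full spiking model, gamma neurons firing at specific phases of each theta cycle would provide the phase-referenced code for phonemic content; here the phase-dependent `gate` plays that role only schematically.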
