Research topic

Speech has been shaped by more than 200,000 years of reciprocal interaction between the articulatory motor system and the auditory system over the course of language evolution. The human auditory cortex is therefore highly tuned to speech, in particular owing to its intrinsic neural activity, which resonates with the articulatory rhythms present in the acoustic signal. Our main goal is to understand how continuous speech is segmented by the human auditory system, encoded by the cortex into linguistic units such as phonemes, syllables and words, and re-coded as an articulatory motor program. We are specifically interested in the role of cortical oscillations in these processes, as they reflect collective neural activity at different temporal and spatial scales and underpin specific neural computations. We explore the interactions between neural oscillations at different frequencies during speech processing, and how these interactions are altered in disorders of language development, such as dyslexia and autism spectrum disorder, as well as in dysfunctional auditory states, such as auditory hallucinations in schizophrenia. We are also interested in the plasticity of oscillation-based processes in deafness and after auditory rehabilitation with cochlear implants.

We combine an experimental approach based on invasive and non-invasive electrophysiology (MEG, EEG, intracranial EEG) in healthy participants and patients with a theoretical approach that consists of building biologically realistic models of speech processing. We also intervene to normalize oscillatory dysfunctions in language pathologies using non-invasive transcranial alternating current stimulation (tACS).
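
To illustrate the general idea of oscillatory entrainment to speech rhythms, the minimal sketch below simulates a single phase oscillator with an intrinsic theta-band rhythm driven by a quasi-periodic, speech-envelope-like signal, and quantifies their phase locking. This is purely illustrative: the frequencies, coupling strength, and drive signal are assumptions for the example and do not correspond to the group's actual models.

```python
# Minimal illustrative sketch (not the group's model): a phase oscillator with an
# intrinsic ~5 Hz rhythm entrained by a ~4 Hz speech-envelope-like drive.
# All parameter values are illustrative assumptions.
import numpy as np

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Speech-envelope-like drive: ~4 Hz syllabic rhythm with slow fluctuations in rate.
syllable_rate = 4.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t)
drive_phase = 2 * np.pi * np.cumsum(syllable_rate) / fs
envelope = 0.5 * (1.0 + np.cos(drive_phase))   # non-negative amplitude envelope

# Phase oscillator: dphi/dt = 2*pi*f0 + k * envelope * sin(drive_phase - phi)
f0 = 5.0                         # intrinsic oscillator frequency (Hz)
k = 15.0                         # coupling strength (illustrative)
phi = np.zeros_like(t)
for i in range(1, len(t)):
    dphi = 2 * np.pi * f0 + k * envelope[i - 1] * np.sin(drive_phase[i - 1] - phi[i - 1])
    phi[i] = phi[i - 1] + dphi / fs

# Phase-locking value between oscillator and drive: values near 1 indicate entrainment.
plv = np.abs(np.mean(np.exp(1j * (phi - drive_phase))))
print(f"Phase-locking value: {plv:.2f}")
```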