Research seminar

Psycholinguistics seminars

The psycholinguistics lab seminars take place on Mondays at 12h15. 

Please note that all seminars will be broadcast via Zoom; contact Tanja Atanasova if you would like to join.

Next seminar:
Monday, October 3rd 2022

"An Entropy and Noisy-Channel Model for Rule Induction"

Dr. Silvia Radulescu

What triggers the inductive steps from memorizing specific items and combinations of items, to inferring rules (or statistical patterns) that hold between these items, and to forming categories and generalizations that apply to whole categories of items?

We propose an innovative information-theoretic model both for learning statistical regularities and for generalizing to new input. Based on Shannon's noisy-channel coding theory (Shannon, 1948), our entropy model hypothesizes that rule induction (generalization) is an encoding mechanism gradually driven by the dynamics between an external factor – input entropy – and an internal factor – channel capacity.

Input entropy quantifies (in bits of information) the statistical properties of the linguistic input, given by the number of items and their probability distribution. Channel capacity is an information-theoretic measure of the encoding capacity of our brain, and is determined by the amount of entropy that can be encoded per second.
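Input entropy as described here is Shannon entropy over the distribution of items. As a minimal illustrative sketch (the function name and example distributions are ours, not the speaker's), it can be computed directly:

```python
import math

def input_entropy(probabilities):
    """Shannon entropy, in bits, of a probability distribution over items."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equiprobable items carry 2 bits of entropy.
print(input_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0

# A skewed distribution over the same four items carries less entropy.
print(input_entropy([0.7, 0.1, 0.1, 0.1]))
```

More items and flatter probability distributions both raise the entropy, which is why they drive input entropy toward (and past) the channel capacity.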

Specifically, in information-theoretic terms, if the input entropy per second is below or matches the channel capacity, the information about specific items and the relations between them can be encoded with high-fidelity item specificity by item-bound generalization, at the channel rate (i.e. the channel capacity). If the input entropy per second exceeds the channel capacity, this high-specificity form of encoding becomes prone to errors due to noise interference, and thus inefficient due to loss of information. The form of encoding is therefore gradually shaped into a high-generality form of encoding – category-based generalization – so as not to exceed the channel capacity.

I will present the results of three artificial grammar experiments (with adults) that tested this model and aimed at a better understanding of the generalization mechanism and the type of generalizations that language learners make. Taken together, these results speak to the validity and wide applicability of this entropy model for the cognitive process of generalization.

Seminars 2022/2023

Dr. Silvia Radulescu, University of Utrecht
"An Entropy and Noisy-Channel Model for Rule Induction" – Samuel Schmid

Prof. Valentina Borghesani, University of Geneva – Tanja Atanasova

Dr. Emily Hunt, Edith Cowan University – Olivia Hadjadj

Prof. Sébastien Pacton, University Paris Descartes
TBA – Estelle Ardanouy

Prof. Anne Keitel, University of Dundee
TBA – Tanja Atanasova

Dr. Sandra Villata
TBA – Julie Franck
Fall 2016

Spring 2016

Fall 2015 

For general information, please contact Tanja Atanasova.