Research seminar

Psycholinguistics seminars

The psycholinguistics lab seminars take place on Mondays at 12h15.

Please note that all seminars will be broadcast via Zoom; contact Tanja Atanasova if you would like to join.

Next seminar:
Monday, October 3rd 2022

"An Entropy and Noisy-Channel Model for Rule Induction"

Dr. Silvia Radulescu

What triggers the inductive steps from memorizing specific items and combinations of items, to inferring rules (or statistical patterns) that relate those items, and further to forming categories and generalizations that apply to whole categories of items?

We propose an innovative information-theoretic model of both learning statistical regularities and generalizing to new input. Based on Shannon's noisy-channel coding theory (Shannon, 1948), our entropy model hypothesizes that rule induction (generalization) is an encoding mechanism driven gradually by the interplay between an external factor, input entropy, and an internal factor, channel capacity.

Input entropy quantifies, in bits of information, the statistical properties of the linguistic input: it is determined by the number of items and their probability distribution. Channel capacity is an information-theoretic measure of the brain's encoding capacity, given by the amount of entropy that can be encoded per second.
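As a rough illustration (not the authors' implementation), the input entropy described above can be computed as the Shannon entropy of the empirical item distribution:

```python
from collections import Counter
from math import log2

def input_entropy(items):
    """Shannon entropy (in bits) of the empirical distribution of items."""
    counts = Counter(items)
    n = len(items)
    # H = -sum(p * log2(p)) over the probability p of each distinct item
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A uniform distribution over 4 distinct items carries 2 bits of entropy:
print(input_entropy(["a", "b", "c", "d"]))  # 2.0
```

More items, or a more uniform distribution over them, yields higher entropy, which is what makes the input harder to encode with item-level specificity.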

Specifically, in information-theoretic terms, if the input entropy per second is at or below the channel capacity, the information about specific items and the relations between them can be encoded with high fidelity and full item specificity, via item-bound generalization, at the channel rate (i.e. the channel capacity). If the input entropy per second exceeds the channel capacity, this high-specificity form of encoding becomes error-prone due to noise interference, and hence inefficient due to loss of information. The form of encoding is therefore gradually shaped into a high-generality form, category-based generalization, so as not to exceed the channel capacity.

I will present the results of three artificial-grammar experiments with adults that tested this model and aimed at a better understanding of the generalization mechanism and of the types of generalization language learners make. Taken together, these results speak to the validity and broad applicability of this entropy model of the cognitive process of generalization.

Seminars 2022/2023

DATE       | SPEAKER                                              | TITLE                                                 | HOST
03.10.2022 | Dr. Silvia Radulescu, University of Utrecht          | An Entropy and Noisy-Channel Model for Rule Induction | Samuel Schmid
31.10.2022 | Prof. Valentina Borghesani, University of Geneva     | TBA                                                   | Tanja Atanasova
07.11.2022 | Dr. Emily Hunt, Edith Cowan University               | TBA                                                   | Olivia Hadjadj
21.11.2022 | Prof. Sébastien Pacton, University Paris Descartes   | TBA                                                   | Estelle Ardanouy
05.12.2022 | Prof. Anne Keitel, University of Dundee              | TBA                                                   | Tanja Atanasova
19.12.2022 | Dr. Sandra Villata, NYU                              | TBA                                                   | Julie Franck
16.01.2023 | TBA                                                  |                                                       |
30.01.2023 | TBA                                                  |                                                       |
06.02.2023 | TBA                                                  |                                                       |
20.02.2023 | TBA                                                  |                                                       |
06.03.2023 | TBA                                                  |                                                       |
20.03.2023 | TBA                                                  |                                                       |
03.04.2023 | TBA                                                  |                                                       |
17.04.2023 | TBA                                                  |                                                       |
01.05.2023 | TBA                                                  |                                                       |
15.05.2023 | TBA                                                  |                                                       |
05.06.2023 | TBA                                                  |                                                       |
19.06.2023 | TBA                                                  |                                                       |
 

ARCHIVES:

2021-2022

2020-2021

2019-2020

2018-2019

2017-2018

Fall 2016

Spring 2016

Fall 2015

For general information, please contact Tanja Atanasova.