Topology and Geometry Seminar — Fall Semester 2022

The Topology and Geometry seminar of the mathematics department of the University of Geneva takes place on Thursdays from 15:00 to 16:00, in room 1-15, Section de mathématiques, rue du Conseil-Général 7-9.

Talks


17/11/2022
  • Speaker: Sakie Suzuki (Keio University).
  • Title: Quantum invariants of closed framed 3-manifolds based on ideal triangulations
  • Abstract: We construct a new type of quantum invariant of closed framed 3-manifolds with vanishing first Betti number. The invariant is defined for any finite-dimensional Hopf algebra, such as the small quantum groups, and is based on ideal triangulations. We use the canonical element of the Heisenberg double, which satisfies a pentagon equation, together with the graphical representations of 3-manifolds introduced by R. Benedetti and C. Petronio. The construction is simple and easy to understand intuitively: the pentagon equation reflects the Pachner (2,3) move on ideal triangulations, and the non-involutiveness of the Hopf algebra reflects the framings. This is joint work with S. M. Mihalache and Y. Terashima.
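For orientation (this is background, not material from the talk): the pentagon equation satisfied by the canonical element T of the Heisenberg double H(A) of a Hopf algebra A, in the form standard since Kashaev's work, reads

```latex
% Pentagon equation in H(A)^{\otimes 3}, where T_{ij} denotes
% T acting on the i-th and j-th tensor factors:
T_{12}\, T_{13}\, T_{23} \;=\; T_{23}\, T_{12}
```

It is this five-term relation that mirrors the Pachner (2,3) move, in which two tetrahedra of an ideal triangulation are replaced by three.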

08/12/2022
Time change: 13:15-14:15
  • Speaker: Benoit Dherin (Google).
  • Title: Deep learning basics and the problem of implicit regularization
  • Abstract: In the first part of this talk, we will recall the building blocks of deep learning, framing the learning problem as an optimization problem solved in practice by gradient descent. This first part will be very accessible and self-contained. We will then attempt to convey how surprising it is that deep learning works so well given the extreme complexity of its solution space, pointing toward the existence of an implicit regularization mechanism that selects the simpler solutions which generalize best over the more complex ones that do not perform well. Finally, we will outline a recent approach to uncovering such an implicit regularization mechanism, based on a backward error analysis of the gradient descent scheme.
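As a minimal sketch of the "learning as optimization" framing mentioned in the abstract (illustrative only, not material from the talk; the names `loss`, `grad`, and `train` are invented here), gradient descent on a toy one-dimensional linear regression looks like:

```python
import numpy as np

# Synthetic 1d regression data: y ≈ 3x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=50)

def loss(w, b):
    # mean squared error of the linear model w*x + b
    return float(np.mean((w * x + b - y) ** 2))

def grad(w, b):
    # exact gradient of the loss with respect to (w, b)
    r = w * x + b - y
    return 2.0 * np.mean(r * x), 2.0 * np.mean(r)

def train(w=0.0, b=0.0, lr=0.1, steps=200):
    # plain gradient descent: repeatedly step against the gradient
    for _ in range(steps):
        gw, gb = grad(w, b)
        w, b = w - lr * gw, b - lr * gb
    return w, b

w, b = train()
```

Backward error analysis, mentioned at the end of the abstract, asks which *modified* loss this discrete iteration follows more closely than the original one; the correction terms it produces are one way to make an implicit regularization mechanism explicit.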

15/12/2022
Time change: 13:15-14:15
  • Speaker: Benoit Dherin (Google).
  • Title: Why neural networks find (geometrically) simple solutions
  • Abstract: We will start by defining a notion of geometric complexity for neural networks, based on intuitive notions of volume and energy. This will be motivated by visualizations of training sequences for simple 1d neural regressions. We will then explain why, for neural networks, the optimization process creates a pressure to keep the network's geometric complexity low. We will also see that many other common heuristics in the training of neural networks (from initialization schemes to explicit regularization strategies) have the side effect of keeping the geometric complexity of the learned solutions low. We will conclude by explaining how this points toward a preference for a form of harmonic map built into the commonly used training and tuning heuristics of deep learning.
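A hedged illustration of a volume/energy-style measure of the kind evoked above (the function `geometric_complexity` and the tiny network below are inventions of this note, not the speaker's definition): the discrete Dirichlet energy of a 1d function, i.e. its mean squared slope over a grid.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def net(x, W1, b1, w2, b2):
    # tiny 1d -> 1d network: one hidden ReLU layer of width 4
    return relu(np.outer(x, W1) + b1) @ w2 + b2

def geometric_complexity(f, xs):
    # discrete Dirichlet energy of f on the grid xs:
    # mean of the squared finite-difference slopes
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)
    return float(np.mean(slopes ** 2))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=4), rng.normal(size=4)
w2, b2 = rng.normal(size=4), 0.0

xs = np.linspace(-1.0, 1.0, 201)
gc = geometric_complexity(lambda t: net(t, W1, b1, w2, b2), xs)
```

Under this measure a constant function has complexity 0 and the identity has complexity 1, matching the intuition that flatter (simpler) functions score lower in energy.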