Adversarial learning course -- IFT 6164 -- (previously Game Theory and ML course -- IFT 6756) -- Cours d'apprentissage automatique antagoniste

Mandatory registration on Piazza and Gradescope (more details to come during the first lecture). Registration link for Piazza
This class will be co-taught with Ioannis Mitliagkas.
The class will be taught in English (all slides and class notes are in English), but you will have the option to complete the homeworks and exams in French. Of course, you can always ask your questions in French.

IMPORTANT: A background in deep learning and machine learning is necessary, and it is important to be comfortable with optimization. To evaluate whether you have sufficient background, you can try Homework 0 before the beginning of the course. A background in reinforcement learning and algorithmic game theory may be a plus.

French Version

Inscription obligatoire à Piazza et Gradescope.
Le cours sera enseigné en anglais (toutes les diapositives et notes de cours sont en anglais).

IMPORTANT: Des connaissances en apprentissage profond, apprentissage automatique et optimisation sont nécessaires. Pour vérifier que vous avez les compétences requises, vous pouvez essayer le devoir à la maison 0 avant le début de la session. Des connaissances en apprentissage par renforcement et en théorie des jeux algorithmiques peuvent être un plus.


The number of machine learning applications related to game theory has been growing over the last few years. For example, two-player zero-sum games are important for generative modeling (GANs) and for mastering games like Go or Poker via self-play. This course sits at the interface between game theory, optimization, and machine learning. It tries to understand how to learn models to play games. It will start with some quick notions of game theory and then delve into machine learning problems with game formulations, such as GANs or multi-agent RL. This course will also cover the optimization (a.k.a. training) of such machine learning games.

Un nombre grandissant d'applications d'apprentissage automatique liées à la théorie des jeux a vu le jour ces dernières années. Par exemple, les jeux à deux joueurs et à somme nulle sont importants pour la modélisation générative (GAN) et la maîtrise de jeux comme Go ou Poker via l'apprentissage autonome (self-play). Ce cours est à l'interface entre la théorie des jeux, l'optimisation et l'apprentissage automatique. Il essaie de comprendre comment apprendre des modèles pour jouer à des jeux. Il commencera par quelques notions rapides de théorie des jeux pour finalement se plonger dans les problèmes d'apprentissage automatique avec des formulations de jeux telles que les GAN ou l'apprentissage par renforcement multi-agents. Ce cours couvrira également l'optimisation (c.-à-d. l'entraînement) de tels jeux d'apprentissage automatique.
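The optimization difficulties mentioned above already appear on the simplest two-player zero-sum game, the bilinear objective min_x max_y f(x, y) = x·y. The sketch below is a minimal toy illustration (not course material; the function names and step sizes are our own choices): simultaneous gradient descent-ascent spirals away from the equilibrium (0, 0), while the extragradient method converges to it.

```python
import math

def gda_step(x, y, lr):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y.

    Player x descends grad_x f = y; player y ascends grad_y f = x.
    """
    return x - lr * y, y + lr * x

def extragradient_step(x, y, lr):
    """Extragradient on the same game: take a look-ahead (extrapolation)
    step, then update the *original* point with the look-ahead gradients."""
    x_la, y_la = x - lr * y, y + lr * x      # extrapolation step
    return x - lr * y_la, y + lr * x_la      # update from the original point

def run(step, n_steps=1000, lr=0.1):
    x, y = 1.0, 1.0
    for _ in range(n_steps):
        x, y = step(x, y, lr)
    return math.hypot(x, y)  # distance to the equilibrium (0, 0)

if __name__ == "__main__":
    print(f"GDA distance:           {run(gda_step):.3f}")           # grows without bound
    print(f"extragradient distance: {run(extragradient_step):.3f}")  # shrinks toward 0
```

On this game each GDA step multiplies the distance to the origin by sqrt(1 + lr²) > 1, while each extragradient step multiplies it by sqrt((1 − lr²)² + lr²) < 1, which is exactly the phenomenon studied in the "Numerics of GANs" and "Extragradient methods" references below.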

Schedule - Plan de cours

Due to the public health situation, the course will be given in a hybrid format (remote until at least January 31st). The plan for the course is the following (the slides and lecture notes are from last year; they are indicative of the topics covered but do not correspond to the slides that will be used). Special thanks to the students who scribed the lectures in 2021: David Dobre, Arnaud L'Heureux, Ivan Puhachov, François Mercier, Sharath Chandra, Mathieu Godbout, William Neveu, Olivier Tessier-Larivière, Tianyu Zhang, Francisco Gallegos, Albert Orozco Camacho, Martin Dallaire, Bharath Govindaraju, Justine Pepin, François David, François Milot, Jonathan Tremblay, Carl Perreault-Lafleur, Mojtaba Faramarzi, Pascal Jutras-Dubé, Arnold (Zicong) Mo, Uros Petricevic, Olivier Ethier, Nehal Pandey, Harsh Kumar Roshan, Sree Rama Sanjeev, Rupali Bhati and Kavin Patel.


Midterm (20%) + Final (30%) + Paper presentation (30%) + Homeworks (20%).

Examen de mi-session (20%) + examen final (30%) + présentation de papiers (30%) + devoirs à la maison (20%).

Relevant references

  • GANs
  • BigGAN
  • WGAN
  • WGAN-GP
  • Poker
  • Diplomacy
  • Hanabi
  • StarCraft II
  • AlphaGo
  • AlphaGo Zero
  • AlphaZero
  • Open-ended learning
  • Spinning-top
  • Unified framework
  • Re-evaluating Evaluation
  • LOLA
  • Numerics of GANs
  • Optimistic methods
  • Extragradient methods
  • Negative momentum
  • Optimal convergence rates in games
  • Noise in games
  • A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning
  • Mean Field Multi-Agent Reinforcement Learning
  • Adversarial training

Potential Projects (from 2021)

Critic of a paper --- Critique d'un papier (Experimental)

This kind of project is about reproducing a research paper, criticizing its experimental method/results, and proposing an ablation study or new experiments that have not been done in the paper. One of the papers in the list above may, for instance, be a good candidate for such a project.

Reproduction d'un papier, puis critique de la méthode expérimentale proposée. Enfin, le projet doit se conclure par la proposition d'une nouvelle expérience pour compléter le papier.

Projects List --- Liste de projets

Read a paper from the list and send me an email for more details on the project:
  • Improving the lower bounds from:
  • Last-iterate convergence for the stochastic extragradient method:
  • Re-evaluating evaluation of multi-player games:
  • Adversarial Example Games for the theory of adversarial examples:
  • Law of robustness for neural networks:
  • New convergence rates for the alternated gradient method (extending the ones in )
  • Primal-dual optimization in RL: a new algorithm to solve the minimax formulation proposed in
  • Adversarial Example Games for adversarial training:
  • Try some optimizers (ExtraAdam, Negative Momentum) on text data
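For the optimizer project above, the core idea behind negative momentum can be sketched on the same bilinear toy game min_x max_y x·y. This is a rough, self-contained illustration under our own choices of step size and momentum values, not a reference implementation: alternating gradient steps are combined with a heavy-ball momentum term, and a negative momentum coefficient damps the rotation of the game dynamics while a positive one amplifies it.

```python
import math

def alt_gda_momentum(beta, lr=0.1, n_steps=200):
    """Alternating gradient descent-ascent with heavy-ball momentum
    on f(x, y) = x * y; returns the final distance to the equilibrium (0, 0).

    beta < 0 is the "negative momentum" regime; the values used below
    (lr=0.1, beta=+/-0.5) are illustrative choices, not tuned settings.
    """
    x, y = 1.0, 1.0
    x_prev, y_prev = x, y
    for _ in range(n_steps):
        x_new = x - lr * y + beta * (x - x_prev)      # x descends grad_x f = y
        y_new = y + lr * x_new + beta * (y - y_prev)  # alternating: y ascends using x_new
        x_prev, y_prev, x, y = x, y, x_new, y_new
    return math.hypot(x, y)

if __name__ == "__main__":
    print(f"negative momentum (beta=-0.5): {alt_gda_momentum(-0.5):.3f}")
    print(f"positive momentum (beta=+0.5): {alt_gda_momentum(+0.5):.3f}")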

Propose your Own Project (higher risk but it will be taken into account) --- Proposition d'un projet personnel (plus risqué. Cela sera pris en compte dans l'évaluation)

The stage of the extended abstract is very important because the proposal might be refused at mid-term if it is judged that it does not fit the standard of the course.