Abstract
This talk aims to survey the triple-point interface between optimization, game theory, and dynamical systems. We will begin by discussing how the ordinary differential equation (ODE) method of stochastic approximation can be used to analyze the trajectories of a wide array of stochastic first-order algorithms in non-convex minimization problems and games (including extra-gradient and optimistic gradient methods). This methodology allows us to derive a range of conditions guaranteeing convergence to min-max stable points while avoiding unstable points and periodic orbits. Despite these encouraging results, however, the overall situation can be considerably more involved, and the sequence of play may converge with arbitrarily high probability to lower-dimensional manifolds that are in no way unilaterally stable (or even stationary). "Spurious attractors" of this type can arise even in simple two-player zero-sum games with one-dimensional action sets and polynomial losses of degree 4, a fact which highlights the fundamental gap between minimization problems and games.
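
As a rough illustration (the notation below is ours and not necessarily that of the talk), the methods in question can be cast as stochastic approximation schemes of the form
\[
  X_{n+1} = X_n + \gamma_n \bigl[ v(X_n) + U_{n+1} \bigr],
\]
where $v$ denotes the players' (negative) individual gradient field, $\gamma_n$ a vanishing step-size, and $U_{n+1}$ a zero-mean noise term; the ODE method then compares the trajectories of such schemes to those of the mean-field dynamics $\dot{x}(t) = v(x(t))$. As a representative example, the extra-gradient method takes an exploratory half-step before updating,
\[
  X_{n+1/2} = X_n + \gamma_n v(X_n),
  \qquad
  X_{n+1} = X_n + \gamma_n v(X_{n+1/2}),
\]
and, under standard step-size and noise assumptions, it tracks the same mean-field ODE.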