Abstract

Score-based losses have emerged as a computationally appealing alternative to maximum likelihood for fitting (probabilistic) generative models with an intractable likelihood (for example, energy-based models and diffusion models). What is gained by forgoing maximum likelihood is a tractable gradient-based training algorithm. What is lost is less clear: in particular, since maximum likelihood is asymptotically optimal in terms of statistical efficiency, how suboptimal are score-based losses? I will survey a recent line of work relating the statistical efficiency of broad families of generalized score losses to the algorithmic efficiency of a natural inference-time algorithm: namely, the mixing time of a suitable score-driven diffusion that can be used to draw samples from the model. This “dictionary” allows us to elucidate the design space of score losses with good statistical behavior by “translating” techniques for speeding up Markov chain convergence (e.g., preconditioning and lifting). I will also briefly touch upon a parallel story for learning discrete probability distributions, in which the role of score-based losses is played by masked-prediction-style losses. Finally, I will end with an outlook on theory for generative models more broadly, both in the short and long term.
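To make the two objects in the abstract concrete, here is a purely illustrative toy sketch (not code from the talk): a score is fit to 1-D Gaussian data via a denoising-score-matching-style regression, and that learned score then drives an unadjusted Langevin diffusion to draw samples. All names and parameter choices below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 1-D Gaussian N(mu, sigma^2). Entirely illustrative.
mu, sigma = 2.0, 1.0
x = rng.normal(mu, sigma, size=20_000)

# Denoising-score-matching-style loss: perturb the data with Gaussian noise
# of std sigma_n; the regression target for the score of the *noised*
# density at x_tilde is -(x_tilde - x) / sigma_n**2.
sigma_n = 0.5
eps = rng.normal(0.0, sigma_n, size=x.shape)
x_tilde = x + eps
target = -eps / sigma_n**2

# Linear score model s(x) = w*x + b, fit by least squares. For Gaussian data
# the noised density is N(mu, sigma^2 + sigma_n^2), so the optimum is
# w* = -1/(sigma^2 + sigma_n^2), b* = mu/(sigma^2 + sigma_n^2).
A = np.stack([x_tilde, np.ones_like(x_tilde)], axis=1)
w, b = np.linalg.lstsq(A, target, rcond=None)[0]

# Inference-time algorithm: unadjusted Langevin dynamics driven by the
# learned score. Its stationary law approximates the noised data density,
# and how quickly it mixes is the "algorithmic efficiency" side of the
# statistical-algorithmic dictionary the abstract describes.
eta, n_steps, n_chains = 0.01, 2_000, 1_000
xs = np.zeros(n_chains)
for _ in range(n_steps):
    xs = xs + eta * (w * xs + b) + np.sqrt(2 * eta) * rng.normal(size=n_chains)

print(xs.mean(), xs.var())  # ≈ mu and ≈ sigma^2 + sigma_n^2
```

In this linear-Gaussian setting both sides of the dictionary are explicit: the score regression has a closed-form optimum, and the Langevin chain is an Ornstein–Uhlenbeck process whose mixing time scales like 1/(eta·|w|) steps.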

Video Recording