Abstract
Normalizing Flows provide a powerful framework for high-dimensional density estimation (likelihood evaluation) and sampling by learning a bijective transformation with a tractable Jacobian. I will first present the Sliced Normalizing Flow (SNF), which is based on Sliced Optimal Transport and learns the data likelihood from samples by optimizing the sliced Wasserstein distance. The method achieves state-of-the-art likelihood performance on small datasets, as well as realistic generative samples of high-dimensional images. In the second half of the talk I will apply SNF to global optimization and sampling problems using an annealing strategy, where at each temperature we use SNF optimization to learn the likelihood and draw new samples. In the context of global optimization, the method is an alternative to Bayesian Optimization and other global optimization strategies: SNF is used to construct an acquisition function that combines unweighted and weighted likelihoods for exploration and exploitation, respectively. In the context of sampling, the method aims to obtain a normalized posterior and can be viewed as an alternative to MCMC. Because it is not based on a Markov Chain, the samples at each annealing temperature can be drawn in parallel, so the method has the promise of being significantly faster than MCMC. I will present some examples where this promise is achieved, as well as the remaining challenges and future directions.
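To make the sliced Wasserstein objective mentioned above concrete, here is a minimal NumPy sketch of the standard Monte Carlo estimator: project both sample sets onto random unit directions, sort the 1D projections, and average the resulting 1D Wasserstein-2 costs. This is an illustration of the general sliced Wasserstein estimate, not the SNF training code itself; the function name, the number of projections, and the equal-sample-size assumption are choices made here for illustration.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, rng=None):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two sample sets x and y of shape (n, d) (equal sample sizes assumed)."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Draw random directions and normalize them to unit vectors on the sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, n_projections).
    xp = x @ theta.T
    yp = y @ theta.T
    # In 1D the optimal transport plan matches sorted samples to sorted samples,
    # so sorting each column gives the 1D Wasserstein-2 coupling in closed form.
    xp.sort(axis=0)
    yp.sort(axis=0)
    # Average the squared 1D costs over projections, then take the square root.
    return np.sqrt(np.mean((xp - yp) ** 2))
```

A flow trained this way iteratively transforms samples so that this distance to the target (or base) distribution decreases; the estimator above is the quantity being driven toward zero.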