Abstract
We present theoretical contributions toward understanding fundamental limits on the performance of machine learning algorithms in the presence of adversaries. We discuss how optimal transport emerges as a natural mathematical tool for characterizing robust risk. We also present new reverse isoperimetric inequalities from geometry that yield theoretical bounds on the sample complexity of estimating robust risk.
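As one illustrative (and well-known) instance of the connection between robust risk and optimal transport, stated here under standard assumptions and not necessarily in the exact form used later in the paper: for a classifier $f$, a metric $d$ on the feature space, and a perturbation budget $\varepsilon$, the robust risk coincides with a supremum of the ordinary risk over an $\infty$-Wasserstein ball around the data distribution $P$,
\[
  R_\varepsilon(f, P)
  \;:=\; \mathbb{E}_{(x,y)\sim P}\Big[\sup_{x' :\, d(x,x') \le \varepsilon} \mathbf{1}\{f(x') \neq y\}\Big]
  \;=\; \sup_{Q :\, W_\infty(Q, P) \le \varepsilon} \mathbb{E}_{(x,y)\sim Q}\big[\mathbf{1}\{f(x) \neq y\}\big],
\]
where $W_\infty$ is the $\infty$-Wasserstein distance induced by $d$ on the feature component, with labels left unperturbed.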