Abstract: I will present a result on extracting latent, interpretable classifiers from neural network models that have implicitly learned decision trees. I will also present some open directions that I would love to tackle at Simons.