Abstract
It is well known that standard methods for sparse linear regression, such as the LASSO, generally perform worse as the input features become more strongly correlated, even though other (computationally inefficient!) estimators do not. For data generated by simple linear structural equation models, this can lead the LASSO to have suboptimal performance. We show that under a low-treewidth condition on the graphical structure of the covariates, there exists a preconditioner which can 'fix' the performance of the LASSO to be nearly statistically optimal; the condition is also shown to be necessary in a certain sense. Based on joint work with Jon Kelner, Raghu Meka, and Dhruv Rohatgi.
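As a minimal numerical sketch of the abstract's premise (not of the talk's result), the toy below compares Lasso estimation error on an independent Gaussian design versus a random-walk design, a simple linear SEM on a path graph, whose treewidth is 1, where adjacent columns are strongly correlated. All parameters (dimensions, sparsity, noise level, regularization strength) are illustrative choices of mine, and the snippet uses scikit-learn's `Lasso`; it illustrates only the degradation under correlation, not the preconditioner that the talk shows can repair it.

```python
# Toy illustration: Lasso tends to estimate worse on strongly correlated
# designs. The random-walk design is a simple linear SEM on a path graph
# (treewidth 1). Parameters here are illustrative, not from the talk.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5            # samples, features, sparsity
beta = np.zeros(d)
beta[:k] = 1.0                  # true sparse coefficient vector

def estimation_error(X, beta, alpha=0.05, noise=0.1):
    """L2 error of the Lasso estimate on data generated from X and beta."""
    y = X @ beta + noise * rng.standard_normal(X.shape[0])
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    return float(np.linalg.norm(coef - beta))

# Independent design: columns are i.i.d. standard normal.
X_indep = rng.standard_normal((n, d))

# Random-walk SEM: x_j = x_{j-1} + fresh noise, so adjacent columns are
# strongly correlated; columns are standardized for a fair comparison.
X_walk = np.cumsum(rng.standard_normal((n, d)), axis=1)
X_walk /= X_walk.std(axis=0)

err_indep = estimation_error(X_indep, beta)
err_walk = estimation_error(X_walk, beta)
print(f"independent design error: {err_indep:.3f}")
print(f"random-walk design error: {err_walk:.3f}")
```

The abstract's point is that on designs like the second one, a suitable preconditioner (which exists whenever the covariates' graphical structure has low treewidth) restores near-optimal statistical performance for the LASSO.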