Recent work in differential privacy has shown how to build provably reusable holdout sets and tamper-proof machine learning competitions:

http://arxiv.org/abs/1411.2664

http://arxiv.org/abs/1502.04585

These papers study how to prevent a malicious adversary from overfitting to the holdout data. But the question remains: if I know that the machine learning competition I am entering uses one of these mechanisms, what is my optimal strategy as a practitioner for achieving the lowest possible error on the test set? Such a model would be guaranteed to generalize, and would have nearly optimal performance subject to that guarantee. Concretely: what are the best optimization schemes for parameter tuning and model selection when one is faced with such a differentially private validation mechanism?
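To make the setting concrete, here is a minimal sketch of the kind of mechanism the first paper proposes (a Thresholdout-style reusable holdout). This is an illustrative simplification, not the papers' exact construction: the threshold, noise scale, and use of Gaussian noise are assumptions of this sketch, and the real mechanism also tracks a budget of overfitting events.

```python
import random

def thresholdout_query(train_vals, holdout_vals, threshold=0.04, sigma=0.01):
    """Answer one analyst query against a reusable holdout (sketch).

    train_vals / holdout_vals: per-example values of the query (e.g. 0/1
    losses of a candidate model) on the training and holdout sets.
    Returns a validated estimate of the query's true mean.
    """
    train_avg = sum(train_vals) / len(train_vals)
    holdout_avg = sum(holdout_vals) / len(holdout_vals)
    # Noise on the comparison and on the released answer is what makes
    # the mechanism differentially private with respect to the holdout.
    eta = random.gauss(0, sigma)
    if abs(train_avg - holdout_avg) > threshold + eta:
        # The query disagrees with the holdout: the analyst is overfitting,
        # so release only a noisy holdout estimate.
        return holdout_avg + random.gauss(0, sigma)
    # The query generalizes: the training estimate is already trustworthy.
    return train_avg
```

From the practitioner's side, each leaderboard submission is one such query, so the strategic question above amounts to: which sequence of candidate models extracts the most signal from these noisy, thresholded answers?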