In this work we develop optimism regularization, a simple technique for inducing optimistic predictions in neural network (NN) models. We show that this technique, inspired by the theoretical requirement of optimism, handily beats non-optimistic exploration strategies in the setting of genetic perturbation experiments and in other practical online data acquisition problems. We show how to use these methodologies to derive algorithms for batch learning with NN function approximators, and we characterize the complexity of batch learning using the Eluder dimension.
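To make the idea concrete, the sketch below shows one way such a regularizer might look in PyTorch: a standard regression loss plus a term that rewards higher predictions, nudging the model toward optimistic estimates on inputs it has not yet observed. The network architecture, the form of the regularizer, and the coefficient `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an optimism-regularized training step.
# Assumption: optimism is induced by penalizing low mean predictions;
# the paper's exact regularizer and hyperparameters may differ.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # optimism coefficient (assumed hyperparameter)

def optimism_regularized_loss(preds, targets):
    fit = nn.functional.mse_loss(preds, targets)  # fit the observed data
    optimism = -preds.mean()                      # reward higher predictions
    return fit + lam * optimism

# One illustrative gradient step on synthetic data.
x, y = torch.randn(64, 8), torch.randn(64, 1)
loss = optimism_regularized_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Under this sketch, `lam` trades off data fit against optimism: larger values bias predictions upward more aggressively, which drives exploration toward inputs whose outcomes the model has not yet pinned down.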