Aldo Pacchiano

Postdoctoral Researcher

Microsoft Research

I am a Postdoctoral Researcher at Microsoft Research NYC. I obtained my PhD at UC Berkeley, where I was advised by Peter Bartlett and Michael Jordan. My research lies in the areas of Reinforcement Learning, Online Learning, Bandits, and Algorithmic Fairness. I am particularly interested in furthering our statistical understanding of learning phenomena in adaptive environments, and in using these theoretical insights and techniques to design efficient and safe algorithms for scientific, engineering, and large-scale societal applications.

Later this year I will be joining the Broad Institute of MIT and Harvard to work on novel mathematical, computational, and algorithmic paradigms for biomedical research. Subsequently, I will start as an Assistant Professor at the Boston University Center for Computing and Data Sciences.


  • Online Learning
  • Reinforcement Learning
  • Deep Reinforcement Learning
  • Algorithmic Fairness


  • PhD in Computer Science, 2021

    University of California Berkeley

  • MEng in Computer Science, 2014

    Massachusetts Institute of Technology

  • Master of Advanced Study in Pure Mathematics, 2013

    Cambridge University

  • Bachelor of Science in Computer Science and Theoretical Mathematics, 2012

    Massachusetts Institute of Technology

Recent Posts

Neural Optimism for Genetic Perturbation Experiments

This work provides a theoretically sound framework for iteratively exploring the space of perturbations in pooled batches in order to …

Model Selection for Contextual Bandits and Reinforcement Learning

In the problem of model selection, the objective is to design methods that select, in an online fashion, the most suitable algorithm to solve a …

Beyond the Standard Assumptions in Reinforcement Learning

In Reinforcement Learning it is standard to assume that the reward is an additive function of per-state feedback. In this work we …


Meta Learning MDPs with Linear Transition Models

AISTATS 2022; also presented in the Workshop on Reinforcement Learning Theory, ICML 2021.

On the Theory of Reinforcement Learning with Once-per-Episode Feedback

NeurIPS 2021; also presented as an oral talk in the Workshop on Reinforcement Learning Theory, ICML 2021.