Aldo Pacchiano

Eric and Wendy Schmidt Center Fellow / Faculty

Broad Institute / Boston University

I am affiliated with the Broad Institute of MIT and Harvard and the Boston University Center for Computing and Data Sciences. I obtained my PhD at UC Berkeley, where I was advised by Peter Bartlett and Michael Jordan. My research lies in the areas of Reinforcement Learning, Online Learning, Bandits, and Algorithmic Fairness. I am particularly interested in furthering our statistical understanding of learning phenomena in adaptive environments, and in using these theoretical insights and techniques to design efficient and safe algorithms for scientific, engineering, and large-scale societal applications.

The Pacchiano Lab for Adaptive and Intelligent Algorithms (PLAIA) site can be found here.

A sample of my literary writings, including short stories and notes in both English and Spanish, can be found here.

Interests

  • Online Learning
  • Reinforcement Learning
  • Deep Reinforcement Learning
  • Algorithmic Fairness

Education

  • PhD in Computer Science, 2021

    University of California, Berkeley

  • MEng in Computer Science, 2014

    Massachusetts Institute of Technology

  • Master of Advanced Study in Pure Mathematics, 2013

    University of Cambridge

  • Bachelor of Science in Computer Science and Theoretical Mathematics, 2012

    Massachusetts Institute of Technology

Recent Posts

Neural Optimism for Genetic Perturbation Experiments

This work provides a theoretically sound framework for iteratively exploring the space of perturbations in pooled batches in order to …

Model Selection for Contextual Bandits and Reinforcement Learning

In the problem of model selection, the objective is to design methods that select, in an online fashion, the most suitable algorithm to solve a …

Beyond the Standard Assumptions in Reinforcement Learning

In Reinforcement Learning it is standard to assume that the reward is an additive function of per-state feedback. In this work we …

Publications

Experiment Planning with Function Approximation

NeurIPS 2023, also presented at the PAC-Bayes Meets Interactive Learning Workshop, ICML 2023.

Anytime Model Selection in Linear Bandits

NeurIPS 2023, also presented at the PAC-Bayes Meets Interactive Learning Workshop, ICML 2023.

Supervised Pretraining Can Learn In-Context Reinforcement Learning

NeurIPS 2023, also presented at the New Frontiers in Learning, Control, and Dynamical Systems Workshop, ICML 2023.

Transfer RL via the Undo Maps Formalism

Presented at the New Frontiers in Learning, Control, and Dynamical Systems Workshop, ICML 2023.

Meta Learning MDPs with Linear Transition Models

AISTATS 2022; also presented in the Workshop on Reinforcement Learning Theory, ICML 2021.

On the Theory of Reinforcement Learning with Once-per-Episode Feedback

NeurIPS 2021; also presented as an oral talk in the Workshop on Reinforcement Learning Theory, ICML 2021.