About Me
I am currently a research scientist at Meta, where I work on recommendation systems.
Previously, I was a PhD student at Princeton University, advised by Jason D. Lee, where I worked on theoretical analyses of meta-learning. Before that, I studied EE/CS at UC Berkeley, where I worked on model-based reinforcement learning with Sergey Levine, Roberto Calandra, and Rowan McAllister.
Publications
- Provable Hierarchy-Based Meta-Reinforcement Learning.
  Kurtland Chua, Qi Lei, Jason D. Lee.
  AISTATS 2023.
- How Fine-Tuning Allows for Effective Meta-Learning.
  Kurtland Chua, Qi Lei, Jason D. Lee.
  NeurIPS 2021.
- On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning.
  Baohe Zhang, Raghu Rajan, Luis Pineda, Nathan Lambert, André Biedenkapp, Kurtland Chua, Frank Hutter, Roberto Calandra.
  AISTATS 2021.
- Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models.
  website | code
  Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine.
  NeurIPS 2018 (spotlight presentation, ~4% of submitted papers).
Talks
- “Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models.” Bay Area Machine Learning Symposium (BayLearn), October 2018.
  video
Teaching
- COS 435: Introduction to Reinforcement Learning, Princeton
  Assistant in Instruction, Spring 2024
- COS 240: Reasoning about Computation, Princeton
  Assistant in Instruction, Fall 2023
- EECS 126: Probability and Random Processes, UC Berkeley
  Undergraduate Student Instructor (uGSI), Spring 2019 | Fall 2018
Honors and Awards
- National Science Foundation Graduate Research Fellowship (2019).
- Gordon Y.S. Wu Fellowship in Engineering (2019).
- EECS Major Citation (2019).
- NVIDIA Pioneer Award (2018). Awarded for Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models at NeurIPS 2018.
- Phi Beta Kappa Honors Society (2018). Inducted as a junior.