About Me

I am a Ph.D. student in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, working on model-based control for robotics with Dr. Byron Boots as part of the Robot Learning Lab and on machine learning for neuroscience with Dr. Matthew Golub. Prior to transferring to the University of Washington, I was a Ph.D. student at Georgia Tech, where I received my M.S. degree in Electrical and Computer Engineering. Before that, I received my B.S. in Biomedical Engineering from the University of Texas at Austin.

When not doing research, I spend my time playing guitar, composing music (mostly progressive rock/metal), and playing the same old JRPG games over and over again.

Research Interests

I am broadly interested in learning and control in both robots and brains, as well as robotics for brains (such as brain-machine interfaces and prosthetics) and brains for robots (biologically-inspired algorithms and architectures). My work largely draws on tools in deep learning, optimal control, reinforcement learning, graphical models, and computational neuroscience. Some overarching research goals of mine are to:

  1. Understand how much prior structure and domain knowledge to incorporate into learning and control algorithms, and how to best leverage it to improve the speed of learning and generalization

  2. Investigate how population-level neural computations perform decision making and generate movements, as well as how neurons co-adapt their behavior during learning and the underlying rules that guide this process

  3. Translate these findings to solve real-world problems in robotics, neuroscience, and medicine

On the robotics side, my focus has been on integrating learned components within the framework of model predictive control (MPC) to improve controller performance and reduce computational requirements. In the neuroscience domain, I am using recurrent neural networks as a test-bed to investigate these questions and applying deep generative models to analyze recordings of neural populations.

News

[03/2023] Successfully passed my Generals Exam (the PhD proposal process at UW CSE)!

[12/2022] Presented our paper on Learning Sampling Distributions for Model Predictive Control at the Conference on Robot Learning (CoRL) in Auckland, New Zealand! [Paper Link]

[05/2022] Presented our paper on Learning to Optimize in Model Predictive Control at the International Conference on Robotics and Automation (ICRA) in Philly! [Paper Link]

[01/2022] Followed my advisor and transferred to the University of Washington.

[06/2019] Presented our paper on An Online Learning Approach to Model Predictive Control at the Robotics: Science and Systems (RSS) conference in Freiburg, Germany! [Paper Link]

[12/2018] Presented our paper on Differentiable MPC for End-to-End Planning and Control at the Neural Information Processing Systems (NeurIPS) conference in Montreal! [Paper Link]