I'm a researcher currently at OpenAI. My research interests are in universal learning mechanisms that work at scale. I graduated from UC Berkeley in 2021, where I worked with Pieter Abbeel and Igor Mordatch on reinforcement learning and sequence modeling.
Email: kzl at berkeley dot edu
Experience
2022–2023 – Hudson River Trading AI Labs
I worked on applied AI research for high-frequency trading, studying deep learning in a noisy, low-latency setting.
2021–2022 – Facebook AI Research
I worked on fundamental AI research studying how to scale Decision Transformer with new capabilities, such as multi-task learning, in-context learning, and online exploration.
2018–2021 – UC Berkeley
My research focused on the universal power of sequence modeling and how to leverage offline data for reinforcement learning.
- Co-first author of Decision Transformer
- First author of Pretrained Transformers as Universal Computation Engines
- Supporting author on several other papers (Google Scholar)
- Reviewer for ICLR, NeurIPS, ICML
- Other highlights: Towards a Universal Decision Making Paradigm
I also spent a significant portion of my undergraduate years teaching.
- Head TA for EECS 126 (Probability and Random Processes) in Fall 2020 and Spring 2021
- TA for EECS 126 in Fall 2019 and Spring 2020
Other
This website template is taken from maximevaillancourt/digital-garden-jekyll-template.