- Excited to release two papers from my internship at Meta!
- We discover a new scaling-law phenomenon: compute optima for knowledge favor larger models, whereas those for reasoning favor more data!
- We release Meta MLGym: a framework for benchmarking and developing agents for AI research!
- I am extremely fortunate to have received an honorable mention for the Jane Street Graduate Research Fellowship! Thank you, Jane Street!
- At NeurIPS ’24, we improve Weak Supervision benchmarking and show that WS is stronger than prior work suggested!
- I moved to London for my internship with the Llama pretraining team 🦙 at Meta, working on scaling laws and skills!
- I moved to San Francisco to intern at Together AI 🐍, working on hybrid LLMs!