Best AI papers explained
A podcast by Enoch H. Kang
515 Episodes
Richard Sutton Declares LLMs a Dead End
Published: 20/10/2025 -
Demystifying Reinforcement Learning in Agentic Reasoning
Published: 19/10/2025 -
Emergent coordination in multi-agent language models
Published: 19/10/2025 -
Learning-to-measure: in-context active feature acquisition
Published: 19/10/2025 -
Andrej Karpathy's insights: AGI, Intelligence, and Evolution
Published: 19/10/2025 -
Front-Loading Reasoning: The Synergy between Pretraining and Post-Training Data
Published: 18/10/2025 -
Representation-Based Exploration for Language Models: From Test-Time to Post-Training
Published: 18/10/2025 -
The attacker moves second: stronger adaptive attacks bypass defenses against LLM jailbreaks and prompt injections
Published: 18/10/2025 -
When can in-context learning generalize out of task distribution?
Published: 16/10/2025 -
The Art of Scaling Reinforcement Learning Compute for LLMs
Published: 16/10/2025 -
A small number of samples can poison LLMs of any size
Published: 16/10/2025 -
Dual Goal Representations
Published: 14/10/2025 -
Welcome to the Era of Experience
Published: 14/10/2025 -
Value Flows: Flow-Based Distributional Reinforcement Learning
Published: 14/10/2025 -
Self-Adapting Language Models
Published: 12/10/2025 -
The Markovian Thinker
Published: 12/10/2025 -
Moloch’s Bargain: emergent misalignment when LLMs compete for audiences
Published: 12/10/2025 -
Transformer Predictor Dynamics and Task Diversity
Published: 11/10/2025 -
Base models know how to reason, thinking models learn when
Published: 11/10/2025 -
Spectrum tuning: Post-training for distributional coverage and in-context steerability
Published: 11/10/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
