Best AI papers explained
A podcast by Enoch H. Kang
525 Episodes
Why in-context learning models are good few-shot learners?
Published: 17/06/2025
Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina
Published: 14/06/2025
The Logic of Machines: The AI Reasoning Debate
Published: 12/06/2025
Layer by Layer: Uncovering Hidden Representations in Language Models
Published: 12/06/2025
Causal Attribution Analysis for Continuous Outcomes
Published: 12/06/2025
Training a Generally Curious Agent
Published: 12/06/2025
Estimation of Treatment Effects Under Nonstationarity via Truncated Difference-in-Q’s
Published: 12/06/2025
Strategy Coopetition Explains the Emergence and Transience of In-Context Learning
Published: 12/06/2025
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Published: 11/06/2025
Agentic Supernet for Multi-agent Architecture Search
Published: 11/06/2025
Sample Complexity and Representation Ability of Test-time Scaling Paradigms
Published: 11/06/2025
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
Published: 10/06/2025
LLMs Get Lost In Multi-Turn Conversation
Published: 09/06/2025
PromptPex: Automatic Test Generation for Prompts
Published: 08/06/2025
General Agents Need World Models
Published: 08/06/2025
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models
Published: 07/06/2025
Decisions With Algorithms
Published: 07/06/2025
Adapting, fast and slow: Causal Approach to Few-Shot Sequence Learning
Published: 06/06/2025
Conformal Arbitrage for LLM Objective Balancing
Published: 06/06/2025
Simulation-Based Inference for Adaptive Experiments
Published: 06/06/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
