525 Episodes

  1. Why in-context learning models are good few-shot learners?

    Published: 17/06/2025
  2. Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina

    Published: 14/06/2025
  3. The Logic of Machines: The AI Reasoning Debate

    Published: 12/06/2025
  4. Layer by Layer: Uncovering Hidden Representations in Language Models

    Published: 12/06/2025
  5. Causal Attribution Analysis for Continuous Outcomes

    Published: 12/06/2025
  6. Training a Generally Curious Agent

    Published: 12/06/2025
  7. Estimation of Treatment Effects Under Nonstationarity via Truncated Difference-in-Q’s

    Published: 12/06/2025
  8. Strategy Coopetition Explains the Emergence and Transience of In-Context Learning

    Published: 12/06/2025
  9. Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

    Published: 11/06/2025
  10. Agentic Supernet for Multi-agent Architecture Search

    Published: 11/06/2025
  11. Sample Complexity and Representation Ability of Test-time Scaling Paradigms

    Published: 11/06/2025
  12. Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators

    Published: 10/06/2025
  13. LLMs Get Lost In Multi-Turn Conversation

    Published: 09/06/2025
  14. PromptPex: Automatic Test Generation for Prompts

    Published: 08/06/2025
  15. General Agents Need World Models

    Published: 08/06/2025
  16. The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models

    Published: 07/06/2025
  17. Decisions With Algorithms

    Published: 07/06/2025
  18. Adapting, fast and slow: Causal Approach to Few-Shot Sequence Learning

    Published: 06/06/2025
  19. Conformal Arbitrage for LLM Objective Balancing

    Published: 06/06/2025
  20. Simulation-Based Inference for Adaptive Experiments

    Published: 06/06/2025


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
