36 Episodes

  1. Drew Cukor - AI Adoption as a National Security Priority (US-China AGI Relations, Episode 3)

    Published: 19/09/2025
  2. Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)

    Published: 12/09/2025
  3. Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)

    Published: 05/09/2025
  4. Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)

    Published: 29/08/2025
  5. Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)

    Published: 22/08/2025
  6. Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)

    Published: 15/08/2025
  7. Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)

    Published: 12/08/2025
  8. Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)

    Published: 01/08/2025
  9. Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)

    Published: 18/07/2025
  10. Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI, Episode 2)

    Published: 04/07/2025
  11. Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good (Worthy Successor, Episode 10)

    Published: 20/06/2025
  12. David Duvenaud - What are Humans Even Good For in Five Years? (Early Experience of AGI, Episode 1)

    Published: 06/06/2025
  13. Kristian Rönn - A Blissful Successor Beyond Darwinian Life (Worthy Successor, Episode 9)

    Published: 23/05/2025
  14. Jack Shanahan - Avoiding an AI Race While Keeping America Strong (US-China AGI Relations, Episode 1)

    Published: 09/05/2025
  15. Richard Ngo - A State-Space of Positive Posthuman Futures (Worthy Successor, Episode 8)

    Published: 25/04/2025
  16. Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds (AI Safety Connect, Episode 2)

    Published: 11/04/2025
  17. Max Tegmark - The Lynchpin Factors to Achieving AGI Governance (AI Safety Connect, Episode 1)

    Published: 28/03/2025
  18. Michael Levin - Unfolding New Paradigms of Posthuman Intelligence (Worthy Successor, Episode 7)

    Published: 14/03/2025
  19. Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway (AGI Governance, Episode 6)

    Published: 24/01/2025
  20. Connor Leahy - Slamming the Brakes on the AGI Arms Race (AGI Governance, Episode 5)

    Published: 10/01/2025


What should be the trajectory of intelligence beyond humanity? The Trajectory pod covers realpolitik on artificial general intelligence and the posthuman transition, asking tech, policy, and AI research leaders the hard questions about what comes after humanity, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.
