Q3 2025 Mentors

  • Alan Chan

    Governance of AI Agents
    GovAI

  • Herbie Bradley

    Implications of a Highly Automated Economy
    University of Cambridge

  • Isabella Duan

    International Coordination on AI Risks
    Safe AI Forum

  • Stefan Heimersheim

    Mechanistic Interpretability
    Apollo Research

  • Joshua Clymer

    AI Control
    Redwood Research

  • Eli Lifland

    AI Forecasting, Governance and Strategy
    AI Futures Project

  • Tobin South

    Trustworthy AI Infrastructure
    MIT & Stanford

  • Prof. Alexander Strang

    Inductive Bias in LLMs
    University of California, Berkeley

  • Jesse Hoogland

    Developmental Interpretability
    Timaeus

  • Ben Bucknall

    Technical AI Governance
    University of Oxford (AIGI)

  • Jacob Lagerros

    AI Hardware Security

  • Erich Grunewald

    US Export Controls
    IAPS

  • Jasper Götting

    AI & Biosecurity Intersection
    SecureBio

  • Peter Barnett

    Technical AI Governance
    Machine Intelligence Research Institute

  • Thomas Larsen

    AI Forecasting, Governance and Strategy
    AI Futures Project

  • Logan Riggs Smith

    Mechanistic Interpretability

  • Alexander Gietelink Oldenziel

    Inductive Bias in LLMs
    Timaeus

  • Tyler Tracy

    AI Control
    Redwood Research

  • Lewis Hammond

    Multi-Agent Safety
    Cooperative AI Foundation
