Q3 2025 Mentors
Alan Chan - Governance of AI Agents - GovAI
Herbie Bradley - Implications of a Highly Automated Economy - University of Cambridge
Isabella Duan - International Coordination on AI Risks - Safe AI Forum
Stefan Heimersheim - Mechanistic Interpretability - Apollo Research
Joshua Clymer - AI Control - Redwood Research
Eli Lifland - AI Forecasting, Governance and Strategy - AI Futures Project
Tobin South - Trustworthy AI Infrastructure - MIT & Stanford
Alexander Strang - Inductive Bias in LLMs - University of California, Berkeley
Jesse Hoogland - Developmental Interpretability - Timaeus
Ben Bucknall - Technical AI Governance - University of Oxford (AIGI)
Jacob Lagerros - AI Hardware Security
Erich Grunewald - US Export Controls - IAPS
Jasper Götting - AI & Biosecurity Intersection - SecureBio
Peter Barnett - Technical AI Governance - Machine Intelligence Research Institute
Thomas Larsen - AI Forecasting, Governance and Strategy - AI Futures Project
Logan Riggs Smith - Mechanistic Interpretability
Alexander Gietelink Oldenziel - Inductive Bias in LLMs - Timaeus
Tyler Tracy - AI Control - Redwood Research
Lewis Hammond - Multi-Agent Safety - Cooperative AI Foundation