We're a group of MIT students working to reduce catastrophic risk from advanced AI.
AI Safety Fundamentals
The main way people get involved with MIT AI Alignment: an 8-week reading group on why AI safety matters and what's being done about it. Topics include AI's trajectory, misalignment, technical safety, policy, and careers in the field. The fall and spring rounds meet in person at our office with dinner included; the summer round is virtual. Open to anyone, with preference given to MIT undergraduate and graduate students.
8 weeks, 2 hours per week
Free food at sessions
Small groups led by MAIA facilitators
No prior AI background required