We're a group of MIT students working to reduce catastrophic risk from advanced AI.

Reducing risks from advanced artificial intelligence may be one of the most important challenges of our time, and one where real progress is possible.

MAIA supports undergraduate and graduate students contributing to that progress.

Flagship program

AI Safety Fundamentals

The main way people get involved with MIT AI Alignment: an 8-week reading group on why AI safety matters and what's being done about it. It covers AI's trajectory, misalignment, technical safety, policy, and careers in the field. Fall and spring cohorts meet in our office with dinner included; the summer cohort is virtual. Open to anyone, with preference given to MIT undergraduate and graduate students.

8 weeks, 2 hours per week
Free food at sessions
Small groups led by MAIA facilitators
No prior AI background required