Get more context on large-scale risks from advanced AI

These opportunities focus on AI safety work aimed at preventing loss of human control over highly capable AI systems. To maximize your eligibility, we recommend familiarizing yourself with the perspectives of this subfield, e.g. by skimming relevant AI safety papers.

AI Safety Papers

Opportunities

Job Opportunities (Research and Engineering)

Funding Opportunities

    • Constellation is offering 3–6 month extended visits (unpaid) at their office (Berkeley, CA) for researchers, engineers, entrepreneurs, and other professionals working on their focus areas. Apply here by April 30 (or by April 12 if you would like to collaborate with a research advisor). See here for more details.
    • Constellation is offering year-long salaried positions ($100K–$180K) at their office (Berkeley, CA) for experienced researchers, engineers, entrepreneurs, and other professionals to pursue self-directed work on one of Constellation's focus areas. Apply here by April 30. See here for more details.
    • The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that provides talented scholars with talks, workshops, and research mentorship in the field of AI alignment and safety, and connects them with the Berkeley alignment research community. The Winter Program will run from early January 2025. Apply here.