Resources

Opportunities and AI safety content relevant to improving the safety of advanced AI systems and reducing potential large-scale risks.

Opportunities

Job Opportunities (Research and Engineering)

Funding Opportunities

Compute Opportunities

AI Safety Programs / Fellowships / Residencies

    • Constellation is offering 3–6 month extended visits at their office (Berkeley, CA) for researchers, engineers, entrepreneurs, and other professionals working on their focus areas. Apply here by April 30 (or by April 12 if you would like to collaborate with a research advisor). See here for more details.
    • Constellation is offering year-long salaried positions ($100K–$180K) at their office (Berkeley, CA) for experienced researchers, engineers, entrepreneurs, and other professionals to pursue self-directed work on one of Constellation's focus areas. Apply here by April 30. See here for more details.
    • The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that provides talented scholars with talks, workshops, and research mentorship in the field of AI alignment and safety, and connects them with the Berkeley alignment research community. The Summer Program will run Jun 17–Aug 23, 2024, and the Winter Program will begin in early January 2025. Apply here.

Workshops and Community

Job Board

Filtered from the 80,000 Hours Job Board

AI Safety Content

AI safety content relevant to improving the safety of advanced AI systems to reduce potential large-scale risks.

Selected Papers

Suggest A Resource

Expanded List of Papers

AI Safety Papers

Blog posts

Highlighted Talk Series

Newsletters

Upskilling