Learn more about large-scale risks from advanced AI
These opportunities focus on AI safety work aimed at preventing loss of human control over very capable AI systems. To maximize your eligibility for these opportunities, we recommend gaining context on the perspectives of this subfield, e.g. by skimming pertinent AI safety papers.
Last updated: 12/03/24
Job Opportunities
- Frontier Red Team
- Responsible Scaling Policy Team
- Alignment Stress Testing Team
- Interpretability Team
- Dangerous Capability Evaluations Team
- Assurance Team
- Security Team
- AI Safety Fellowship (deadline: 01/20/25)
- AI Safety and Alignment Team (Bay Area)
- Scalable Alignment Team
- Frontier Model and Governance Team
- Mechanistic Interpretability Team
- Responsibility & Safety Team
- In particular, the Autonomous Systems Team's Engineering Residency (though note that this role has some citizenship restrictions)
- Note that RAND's Technology and Security Policy Fellowship is not just for policy research; ML engineers, software engineers with either infrastructure or front-end experience, and technical program managers are also encouraged to apply via this Fellowship.
- FAR AI (roles)
- Model Evaluations and Threat Research (METR) (roles)
- Apollo Research (roles)
- You can use the filtered view of our database to find professors with open positions at any level of seniority, or the unfiltered view to find potential collaborators.
- We'd also like to highlight the Future of Life Institute's funding for Technical Postdoctoral Fellowships and the Center for Human-Compatible AI's Research Fellowship and Research Collaborator positions.
Funding Opportunities
- Request for proposals: AI governance (In the "technical governance" section, examples include: compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms. See also the Governance and Policy section below)
- Career development and transition funding
- Course development grants
- Funding for work that builds capacity to address risks from transformative AI
- Future of Life Institute Postdoctoral Fellowships (deadline: 01/06/25)
- AI Safety Fund: RFP for Bio- and Cyber- Security and AI (deadline: 01/20/25)
- SafeBench Competition (deadline: 02/25/25; $250k in prizes)
- NSF Secure and Trustworthy Cyberspace Grants
- Foresight Institute: Grants for Security, Cryptography & Multipolar Approaches to AI Safety (quarterly applications)
- Long-Term Future Fund
- Anthropic Model Evaluation Initiative (accepting EOIs for their next round)
- Safeguarded AI aims to provide quantitative safety guarantees for AI. Their current funding round is for demonstrations that AI systems with such guarantees are useful and profitable in safety-critical contexts (e.g. optimising energy networks, clinical trials, or telecommunications).
- Cooperative AI Foundation Concordia Contest 2024
- Future of Life Institute: PhD Fellowships
- Future of Life Institute: How to Mitigate AI-driven Power Concentration
- Schmidt Sciences: Safety Assurance through Fundamental Science in Emerging AI
- Cooperative AI Foundation Research Grants
- Open Philanthropy Request for proposals: benchmarking LLM agents on consequential real-world tasks
- Note: SFF (Survival and Flourishing Fund) gives grants to universities or to organizations with 501(c)(3) status (i.e. your nonprofit has 501(c)(3) status, or you have a fiscal sponsor that does).
- Call for Research Ideas: Expanding the Toolkit for Frontier Model Releases from CSET
- OpenAI: Research into Agentic AI Systems, Superalignment Fast Grants, OpenAI Cybersecurity Grants (assumed closed)
- NSF: Safe Learning-Enabled Systems and Responsible Design, Development, and Deployment of Technologies
- Center for Security and Emerging Technology (CSET): Foundational Research Grants
Compute Opportunities
- National Deep Inference Fabric (NDIF): you can request early access to this research computing project for interpretability research
- Cohere for AI: subsidized access to APIs
AI Safety Programs / Fellowships / Residencies / Collaborations
- Anthropic AI Safety Fellow (6 months, San Francisco or London; deadline: 01/20/25)
- Academic Engagement: research collaborations and workshops targeted at academics.
- 6-month residency with the Autonomous Systems Team (note that this role has some citizenship restrictions).
- Visiting Fellows: a 3-6 month (unpaid) visit at the Constellation office (Berkeley, CA) for researchers, engineers, entrepreneurs, and other professionals working on their focus areas. Applications open for the winter cohort (beginning January 6th).
- Residencies: a year-long salaried position ($100K-$300K) for experienced researchers, engineers, entrepreneurs, and other professionals to pursue self-directed work on one of Constellation's focus areas in the Constellation office (Berkeley, CA).
- Workshops: Constellation also expects to offer 1-2 day intensive workshops for experts working in or transitioning into their focus areas.
- Impact Academy's Global AI Safety Fellowship (deadline: 12/31/24)
- Supervised Program for Alignment Research (SPAR) Spring Program (expression of interest)
Workshops and Community
- Socially Responsible Language Modelling Research (SoLaR)
- Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI
- Red Teaming GenAI: What Can We Learn from Adversaries?
- Interpretable AI: Past, Present and Future
- Safe Generative AI
- Expression of Interest form for FAR AI's Alignment Workshop. Recordings from the previous workshop are also available on the website.
- Expression of Interest form for Constellation Workshops. Constellation expects to offer 1–2 day intensive workshops for people working in or transitioning into their focus areas.
- ML Safety Social hosted by the Center for AI Safety
- Secure and Trustworthy Large Language Models
- How Far Are We From AGI?
- Reliable and Responsible Foundation Models
- ME-FoMo: Mathematical and Empirical Understanding of Foundation Models
- New Orleans Alignment Workshop (Dec 2023), recordings available
- San Francisco Alignment Workshop 2023 (Feb 2023), recordings available
- Neural Scaling & Alignment: Towards Maximally Beneficial AGI Workshop Series (2021-2023)
- Human-Level AI: Possibilities, Challenges, and Societal Implications (June 2023)
- Workshop on AI Scaling and its Implications (Oct 2023)
- Arkose's Database of AI Safety Professionals
- AI Existential Safety Community from Future of Life Institute
- See speakers from the Alignment Workshop series (SF 2023, NOLA 2023)
- AISafety.com's List of AI Safety Communities
Alternative Technical Opportunities
- Information Security roles
- Overview from a security engineer at Google
- Jason Clinton's recommended upskilling book
- Forecasting (see especially Epoch)
- Software Engineering
Governance and Policy
AI governance is focused on developing global norms, policies, and institutions to increase the chances that advanced AI is beneficial for humanity.
- The Horizon Fellowship places experts in emerging technologies in federal agencies, congressional offices, and think tanks in Washington, DC for up to two years.
- The Future of Life Institute is looking to hire someone with experience in both hardware engineering and project management to lead a new initiative in technical AI governance.
- The series is designed to help individuals interested in federal AI and biosecurity policy decide whether to pursue careers in these fields. Each session features experienced policy practitioners who discuss what it's like to work in emerging technology policy and offer actionable advice on how to get involved. Some sessions will be useful for people from all fields and career stages, while others focus on particular backgrounds and opportunities. You may attend all or only some of the sessions.
- AI Governance Curriculum by BlueDot Impact
- AI Policy Resources by Emerging Technology Policy Careers
- Center for Long-Term Resilience (CLTR)
- RAND's Technology and Security Policy work
- Horizon Institute for Public Service
- Institute for AI Policy and Strategy
- Center for Security and Emerging Technology (CSET)
- Frontier AI Task Force
- Center for the Governance of AI (GovAI)
- Industry AI Governance teams
- Center for AI Policy
Job Board
Filtered from the 80,000 Hours Job Board