Reducing potential risks from advanced AI systems is a difficult, unsolved problem, and the most helpful pathways remain uncertain. However, here are some candidate ways to reduce risk:
- AI alignment research & engineering: Making progress on technical AI alignment (research and engineering)
- AI governance: Developing global norms, policies, and institutions to increase the chances that advanced AI is beneficial
- Discussion: Engaging in discussion about these risks with colleagues
- Learning more about AI alignment: The technical fields of AI alignment and AI governance are still relatively formative, so it is important to thoroughly understand the theoretical and empirical problems of alignment and the current work in these areas.
Technical AI Alignment Research and Engineering
Overview of the space
There are different subareas and research approaches within the field of AI alignment, and you may be a better fit for some than others.
- Some categories within the field:
- Empirical research and engineering (e.g. Anthropic, FAR AI, Google DeepMind's alignment teams, OpenAI's alignment teams, Center for AI Safety (CAIS), Redwood Research)
- Empirical research and engineering aimed at evaluations of AI capabilities and alignment, which also supports technical AI governance work (e.g. ARC Evals, Apollo Research, Palisade Research)
- Technical AI governance outside of evaluations, including technical standards development (e.g. Anthropic's Responsible Scaling Policy), forecasting, information security, and other work
- Theoretical research (e.g. the Alignment Research Center, MIRI, and many independent researchers)
- Academia (e.g. UC Berkeley's CHAI, NYU's Alignment Research Group, the research groups of Jacob Steinhardt and David Krueger, FOCAL, and the Cooperative AI Foundation) often includes theoretical work and empirical work that doesn't require access to large-scale infrastructure.
- Engineering aimed at AI alignment is almost always in industry. ML engineers and research engineers are especially in demand, but there is also a range of other engineering roles, particularly in security and software.
- Opportunities in AI Safety and Job Board
- Roles listed on individual organizations' webpages listed above
Guides to getting involved
- Research: FAQ: Advice for AI Alignment Researchers by Rohin Shah (DeepMind) or How to pursue a career in technical AI alignment
- Engineering: Levelling Up in AI Safety Research Engineering
AI Governance
The Center for the Governance of AI (GovAI) describes the AI governance problem as "the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI" (GovAI Research Agenda). "AI governance" is distinct from "AI policy" in its primary focus on advanced AI — the hypothetical general-purpose technology — rather than AI as it exists today. The field holds that focusing on advanced AI systems raises different actions, risks, and opportunities than focusing on more contemporary issues, though AI governance and AI policy naturally interface with each other and have overlapping domains.
- Read through the AI Governance Curriculum
- Several organizations working in the space: Frontier AI Task Force, Center for the Governance of AI (GovAI), OpenAI's Governance Team, Center for Long-Term Resilience (CLTR), RAND's Technology and Security Policy work, Center for Security and Emerging Technology (CSET)
- If you're interested in a career in US AI policy: Overview by 80,000 Hours and Job Board