What can I do?

Reducing the potential risks from advanced AI systems is a difficult, unsolved problem, and it is uncertain which approaches will prove most helpful. However, here are some candidate ways to reduce risk:

- AI alignment research and engineering: making progress on technical AI alignment
- AI governance: developing global norms, policies, and institutions to increase the chances that advanced AI is beneficial
- Support: providing support for others working in AI alignment (e.g. operations roles)
- Discussion: engaging in discussion about these risks with colleagues

Learn more about AI alignment

The fields of AI alignment and AI governance are still at a relatively formative stage, so it is important to thoroughly understand the theoretical and empirical problems of alignment, as well as the current work in these areas.

Technical AI Alignment Research and Engineering

Overview of the space

There are different subareas and research approaches within the field of AI alignment, and you may be a better fit for some than others.

Funding sources

Guides to getting involved

Interested in working in China?

AI Governance

The Center for the Governance of AI (GovAI) describes the AI governance problem as "the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI" (GovAI Research Agenda). "AI governance" is distinct from "AI policy" in its primary focus on advanced AI, a hypothetical general-purpose technology, rather than on AI as it exists today. Researchers in the field hold that focusing on advanced AI systems raises different actions, risks, and opportunities than focusing on contemporary issues, though AI governance and AI policy naturally interface with each other and have overlapping domains.