AI Risk Discussions

AI Risk Discussions (2022) was a project of Arkose aimed at facilitating discussion and evaluation of potential risks from advanced AI, with a focus on soliciting and engaging with expert perspectives on the arguments.

Interviews

One of our main goals was to facilitate conversations between technical experts and those concerned about potential risks from advanced AI systems. To that end, we conducted 97 interviews with AI researchers about their perspectives on current AI and its future, with a focus on risks from advanced systems. This collection of interviews includes anonymized transcripts, a quantitative analysis of the most common perspectives, and an academic talk discussing preliminary findings.

Interactive Walkthrough

In our interviews with AI researchers, some of the core questions focused on risks from advanced AI systems. To explore the interview questions, common responses from AI researchers, and potential counterarguments, we created an interactive walkthrough. You are encouraged to explore your own perspectives; at the end, your series of agreements and disagreements will be displayed so that you can compare your views with those of other users of the site.


Resource Center and Getting Involved

Interested in learning more? Our Resource Center has further reading, both for machine learning researchers and for the general public.

Concerned about potential risks from advanced AI systems? We have recommendations for what you can do to help. In particular, technical research on AI alignment is especially needed.


Project contributors

AI Risk Discussions (AIRD) was a project of Arkose. It was led by Dr. Vael Gates, with many other contributors, most prominently Lukas Trötzmüller (interactive walkthrough), Maheen Shermohammed (quantitative analysis), Zi Cheng (Sam) Huang (interview tagging), and Michael Keenan (website development).