Arkose is an early-stage, field-building nonprofit with the mission of improving the safety of advanced AI systems to reduce potential large-scale risks. To this end, Arkose focuses on supporting researchers, engineers, and other professionals interested in contributing to AI safety.
Arkose's earlier work, from 2022, aimed to facilitate discussion and evaluation of potential risks from advanced AI, with a focus on soliciting and engaging with expert perspectives on the arguments and on providing resources for stakeholders. Our results, based on a set of 97 interviews with AI researchers about current AI and the future of AI (conducted in the pre-ChatGPT era), can be found below.
One of our main goals was to facilitate conversations between technical experts and those concerned about potential risks from advanced AI systems. To that end, we conducted the 97 interviews described above, with a focus on risks from advanced systems. The resulting collection includes anonymized transcripts, a quantitative analysis of the most common perspectives, and an academic talk discussing preliminary findings.
In our interviews with AI researchers, some of the core questions focused on risks from advanced AI systems. To explore these questions, common responses from AI researchers, and potential counterarguments, we created an interactive walkthrough. You are encouraged to explore your own perspectives; at the conclusion, your series of agreements and disagreements will be displayed so that you can compare your views with those of other users of the site.
AI Risk Discussions was led by Dr. Vael Gates, with many other contributors, most prominently Lukas Trötzmüller (interactive walkthrough), Maheen Shermohammed (quantitative analysis), Zi Cheng (Sam) Huang (interview tagging), and Michael Keenan (website development).