About

Arkose is an early-stage, field-building nonprofit with the mission of improving the safety of advanced AI systems. We support researchers, engineers, and other professionals interested in contributing to AI safety by:

  • Hosting calls and other support programs that provide connections, mentorship, resources, and tailored guidance to researchers, engineers, and others with relevant skill sets, helping them engage in technical AI safety research. Machine learning professionals are invited to request a call.
  • Advancing understanding of AI safety research and opportunities through outreach and by curating educational resources. See our Resources, which are periodically reviewed by an advisory panel of experts in the field.

Previous Work

AI Risk Discussions

Arkose's earlier work, from 2022, aimed to facilitate discussion and evaluation of potential risks from advanced AI, with a focus on soliciting and engaging with expert perspectives on the arguments and providing resources for stakeholders. Our results, based on 97 interviews with AI researchers about their perspectives on current and future AI (conducted before ChatGPT's release), can be found below.

Interviews

One of our main goals was to facilitate conversations between those concerned about potential risks from advanced AI systems and technical experts. To that end, we conducted 97 interviews with AI researchers on their perspectives on current AI and the future of AI, with a focus on risks from advanced systems. This collection of interviews includes anonymized transcripts, quantitative analysis of the most common perspectives, and an academic talk discussing preliminary findings.

Interactive Walkthrough

In our interviews with AI researchers, some of the core questions focused on risks from advanced AI systems. To explore the interview questions, common responses from AI researchers, and potential counterarguments, we created an interactive walkthrough. You are encouraged to explore your own perspectives; at the conclusion, your series of agreements and disagreements will be displayed so that you can compare your perspectives with those of other users of the site.

Project Contributors

AI Risk Discussions was led by Dr. Vael Gates, with many other contributors, most prominently Lukas Trötzmüller (interactive walkthrough), Maheen Shermohammed (quantitative analysis), Zi Cheng (Sam) Huang (interview tagging), and Michael Keenan (website development).