Arkose is a nonprofit with the mission of improving the safety of advanced AI systems to reduce potential large-scale risks. Our 30-minute calls support machine learning professionals interested in AI safety research or engineering, with personalized advice tailored to your level of interest and situation, whether you're an industry researcher or engineer, a professor, or a PhD student.

After discussing your specific needs and questions, we can:

  • Help you explore funding and job opportunities. Arkose maintains a public list of grant opportunities, compute options, fellowships, and jobs on our Resources page.
  • Connect you with potential mentors and collaborators in industry or academia. Whether you're seeking senior researchers, PhD students, or industry or sabbatical connections, Arkose can introduce you to experts in our network.
  • Recommend safety-related papers in your research area, along with a shortlist of advanced AI safety materials.
  • Provide as-needed support for six months. We can help you clarify emerging options, follow up on your plans, and point you to new resources.
Request a Call

People we speak with work at:

Google
Stanford
MIT
Carnegie Mellon University
UC Berkeley
University of Oxford
University of Hong Kong
IBM
Cruise

Testimonials

  • AI Safety/Alignment is an active field with diverse perspectives. The Arkose website has a rich collection of some of the most relevant papers, jobs, companies, and grants, which I found very handy. I gained much clarity when I interacted with the Arkose team over a call and debated some of the safety issues. I would highly recommend the supportive Arkose team to anyone hoping to enter the field.
    — Postdoctoral Researcher at the University of Cambridge
  • Vael had an excellent understanding of the alignment landscape, and useful insight into the skills and background required for working at top AI safety labs. It was really helpful to understand where I needed to be, and Vael offered lots of suggestions to help me get there, including connecting me to people. The advice was well tailored to mid- and late-career ML professionals, who may not be as suited to commonly suggested entry routes into technical AI safety research.
    — Salman Mohammadi, AI Research Engineer
  • Once I realized that my research vision closely resembles that of the AI alignment and safety community, I faced the substantial task of understanding that community's distinct culture and identifying related funding opportunities. Over a few calls and emails with Vael at Arkose, I quickly gained an initial map of AI alignment and safety as well as crucial guidance on its navigation. The downstream impact has been immense.
    — Brad Knox, Research Associate Professor of Computer Science
  • I'm new to the ecosystem, and Vael has been extremely valuable in connecting me with key people, informing my general approach to strategy, and helping me become more efficient and effective at AI safety research. They enabled me to really make the most of my in-person visit to the AI safety community in Berkeley: I've been able to get in touch with some key people and come up with resources for the AI safety research that I'm doing, which has significantly accelerated my research efforts. Vael's doing a fantastic job of bringing people in the wider community together, and I'm very grateful for their services.
    — Christian Schroeder de Witt, Postdoctoral Researcher at the University of Oxford
  • As someone who is relatively new to alignment research, the call with Arkose was more than worth the time. Loads of useful pointers, honest feedback and an overall pleasant call experience. Definitely helped me to refine my plan to transition into the field!
    — Anonymous
  • My call with Vael Gates from Arkose was enlightening and profoundly helpful, providing me with access to resources I was previously unaware of. Their dedication to offering personalized support has immensely enhanced my AI safety research. Arkose's cool and invaluable contribution is a cornerstone for the community's thriving success, positioning them as a trusted ally for student researchers like myself.
    — Yi Zeng, PhD Student at Virginia Tech