Arkose is a nonprofit with the mission of improving the safety of advanced AI systems to reduce potential large-scale risks. We aim to support the growing field of AI safety, which needs skilled researchers and engineers. If you’re interested in exploring opportunities, we can help you:

  • Find funding and compute
  • Learn about job openings
  • Connect with senior researchers in industry and academia

During our 30-minute calls, we offer personalized advice tailored to your level of interest and situation. We’ve talked with professors, PhD students, and industry professionals at places like Google and Stanford, and can introduce you to experts at top safety labs and teams. We’re a nonprofit, and our help is always free.

Want to know more?

  • Arkose maintains a public list of grant opportunities, jobs, compute options, and more on our Opportunities page.
  • Our list of AI safety papers offers a short introduction to research on reducing large-scale risks from advanced AI, giving context for many of the listed opportunities.
  • Our database of AI Safety Professionals lets you filter by institution and role, or find people hiring for specific academic positions (e.g., postdoctoral researchers).
Request a Call

Want to stay up to date?

Arkose keeps an eye out for the best ways to get involved (funding, jobs, compute, and other relevant opportunities) and collects them into a regular newsletter, delivered straight to your inbox.

People we speak with work at:

Google
Stanford
MIT
Carnegie Mellon University
UC Berkeley
University of Oxford
University of Hong Kong
IBM
Cruise

Testimonials

  • AI Safety/Alignment is an active field with diverse perspectives. The Arkose website has a resourceful collection of some of the most relevant papers, jobs, companies and grants, which I found very handy. I gained much clarity when I interacted with the Arkose team over a call and debated on some of the safety issues. I would highly recommend the supportive Arkose team for anyone hoping to enter the field.
    — Postdoctoral Researcher at the University of Cambridge
  • Vael had an excellent understanding of the alignment landscape, and useful insight into the skills and background required for working at top AI safety labs. It was really helpful to understand where I needed to be, and Vael offered lots of suggestions to help me get there, including connecting me to people. Advice was well-tailored towards mid/late career ML professionals, who may not be as suited to commonly suggested entry-routes into technical AI safety research.
    — Salman Mohammadi, AI Research Engineer
  • Once I realized that my research vision closely resembles that of the AI alignment and safety community, I faced the substantial task of understanding that community's distinct culture and identifying related funding opportunities. Over a few calls and emails with Vael at Arkose, I quickly gained an initial map of AI alignment and safety as well as crucial guidance on its navigation. The downstream impact has been immense.
    — Brad Knox, Research Associate Professor of Computer Science
  • I appreciated the amount of understanding Vael had of the whole field and current situation; Vael felt on top of things. It was cool to see Vael's ability to personalize advice for me given the rapid pace that they were getting information and pulling out what was relevant. The call was more than I expected - the most useful part was the Quickstart guide, though much of it was useful, and the amount of advice seemed fantastic.
    — Zhen Ning David Liu, PhD student at the University of Cambridge
  • I'm new to the ecosystem, and Vael has been extremely valuable in connecting me with key people, informing my general approach to strategy, and becoming more efficient and effective at AI safety research. They enabled me to really make the most of my in-person visit to the AI safety community in Berkeley: I've been able to get in touch with some key people and come up with resources for the AI safety research that I'm doing, which has significantly accelerated my research efforts. Vael's doing a fantastic job of tying in people of the wider community and I'm very grateful for their services.
    — Christian Schroeder de Witt, Postdoctoral Researcher at the University of Oxford
  • As someone who is relatively new to alignment research, the call with Arkose was more than worth the time. Loads of useful pointers, honest feedback and an overall pleasant call experience. Definitely helped me to refine my plan to transition into the field!
    — Anonymous
  • After a call with Vael, I feel considerably better prepared for my planned career change into AI safety. Vael had a clear view of the whole field and the possible routes into research, and was able to talk in a lot of detail about the routes I was most interested in but also suggest new ones that I had not considered. Best of all, they helped me to put together a series of actionable steps to make progress with my career change. I strongly recommend applying to speak with Arkose if you are looking to move into the field.
    — Hannes Whittingham, ML Scientist
  • It was really interesting and helpful talking with [Vael]. Thank you very much for your time, input, and also the resources. Having the chance to talk with a like-minded, knowledgeable person that can think outside the box and also shares my concerns was a great experience. The provided links are also very helpful, and I plan to spend significant time processing them. It was also a good opportunity to discuss my own ideas and get feedback on them.
    — Wolfgang Slany, Professor at Graz University of Technology
  • My call with Vael Gates from Arkose was enlightening and profoundly helpful, providing me with access to resources I was previously unaware of. Their dedication to offering personalized support has immensely enhanced my AI safety research. Arkose's cool and invaluable contribution is a cornerstone for the community's thriving success, positioning them as a trusted ally for student researchers like myself.
    — Yi Zeng, PhD Student at Virginia Tech
  • Arkose and Vael have done a great job of curating funding sources and compute resources that are beneficial for early career academicians interested in AI Safety research. Additionally, the opportunity to connect with researchers interested in the same problems is a big plus!
    — Anshuman Chhabra, Postdoctoral Researcher at UC Davis