*This application has closed*

AI Safety Call Specialist (9-month fixed term)

Summary

Arkose, an early-stage nonprofit focused on AI safety field-building, is seeking a full-time AI Safety Call Specialist to join our team remotely. This position is central to Arkose's field-building work, as you'll be directly speaking with and supporting professors, PhD students, and industry professionals who are interested in AI safety research or engineering.

This independent contractor role offers a yearly pay rate of $75,000 - $95,000, depending on location and experience. The role is for a 9-month term, with likely extension if we secure organizational funding for 2025.

If you're looking for somewhere you can help build the field of people reducing risks from advanced AI, learn quickly, and take ownership of Arkose's core one-on-one call work, please consider applying.

The application deadline is March 30, with applications assessed on a rolling basis (early applications encouraged).

Apply here

About Arkose

Arkose is an early-stage, field-building nonprofit with the mission of improving the safety of advanced AI systems to reduce potential large-scale risks. To this end, Arkose focuses on supporting researchers, engineers, and other professionals interested in contributing to AI safety. In particular, we offer 30-minute calls to support professors, PhD students, and industry professionals who are pursuing or interested in AI safety research or engineering. On these calls, we speak with professionals about their needs and questions, and provide informational resources about opportunities (e.g. job, funding, compute, and other resources) in the space. Alongside these calls, we host a public list of Arkose Resources, and a list of safety-related papers in different research areas.

Arkose is led by founder Vael Gates with two operations team members, and is supported by the Survival and Flourishing Fund.

About the role

We are seeking a full-time AI Safety Call Specialist to join our remote team. This position is central to Arkose's field-building work, as you'll be directly speaking with and supporting professors, PhD students, and industry professionals who are interested in AI safety research or engineering.

Key responsibilities

  • You'll speak with professors, PhD students, and industry professionals in 30-minute calls, several times a day. You'll learn about professionals' needs, speak about the AI safety research landscape and relevant opportunities, and connect individuals to potential collaborators and experts.
  • You'll communicate with professionals via email and effectively handle individual requests and inquiries. In particular, you'll complete follow-ups for each call, including introducing professionals to other potential collaborators and experts, sending follow-up messages, and investigating whether we can support nonstandard requests.
  • You'll continually learn about the AI safety landscape (both research and opportunities), spot opportunities to improve Arkose's offerings, and grow and maintain Arkose's expert network.
  • You'll work with the Arkose team (your manager Vael and the operations team) to ensure that your and the professionals' needs are being met. We aim to support you throughout your skill acquisition and growth!

Location, pay, duration, and benefits

  • This is a full-time remote position; international applicants are welcome, but we strongly prefer candidates who can work roughly US time zones.
  • The yearly pay range is $75,000 - $95,000, depending on location and experience.
  • This is a fixed-term position for 9 months, with likely extension if the organization continues (we haven't yet secured organizational funding for 2025).
  • This is an independent contractor role. You will not receive benefits, but will receive unlimited PTO.
  • Arkose is currently in a period of potential organizational transition: we may be absorbed into a larger AI safety organization, or Vael may shift to working part-time at Arkose. We expect this position will remain as specified within this job description, and are happy to discuss this at any stage of the application process.
  • If these or any of the application details are dealbreakers, please still apply and note them in your application; there is some room for negotiation.

About you

You might be a great fit for this role if:

  • Talking to talented, mid-career AI professionals about how to support them sounds like fun! Your primary work will be talking to people (and completing the associated follow-ups), so we're looking for people who'd enjoy this kind of work in the long term.
  • You have strong "people skills" including:
    • You expect to enjoy talking to smart, driven mid-career professionals.
    • You feel comfortable exploring different perspectives, while simultaneously being capable of arguing for positions you support.
    • You're able to engage in a variety of discussions with professionalism and positivity, emphasizing curiosity and supportiveness while prioritizing key conversational aims.
  • You either have, or have an interest in developing, AI safety knowledge, including:
    • You like learning about AI safety research! Ideally, you can easily skim abstracts of relevant research papers and communicate the main ideas to technical professionals.
    • You're interested in learning about the AI safety landscape, including its organizations and opportunities, and pursue means to keep up to date.
  • You're comfortable with stakeholder management, and can communicate clearly and appropriately with different groups of experts, verbally and in writing.
  • You're conscientious, and can easily keep track of different follow-up tasks amidst various individual calls.
  • You can communicate straightforwardly with Vael and the rest of the team, openly discussing any uncertainties, needs, or suggestions for improvement. As a remote team with a lot of Slack communication, our team culture is oriented around relatively direct communication with fast feedback loops, alongside appreciation of each other's work. We want the Arkose work environment to continue to feel like a welcoming and productive space!
  • You're self-motivated in a remote role, and feel excited to continually learn on the job, with the flexibility that comes from being in an early-stage start-up environment. You feel comfortable owning tasks and being in an independent role with high-level responsibilities.

Application

The current deadline for this application is March 30, and we will be assessing candidates on a rolling basis. We're excited to hire someone to start as soon as possible, and if we find a great candidate in early applications, will close the hiring round early. Early applications are thus encouraged!

The application process may vary by candidate, but for successful candidates it will typically involve:

  • An initial application
  • A 30-minute interview
  • A 60-minute interview
  • A 2-day paid work trial

You can expect that the application process will be very similar to the work itself. For example, you'll be asked to draft what you might say to an example professional, you'll roleplay as a call specialist while Vael plays an AI professional, and at later stages you may talk to an AI professional.

Preparing for the interviews

You are highly encouraged to prepare for this role before you reach the interview stage. While we expect a substantial amount of training and learning within the role, it is still very useful if candidates come with existing knowledge, or at least a demonstrated enjoyment of learning about the AI safety landscape. To prepare, you are encouraged to:

  • Read through the Arkose Resources page, specifically the Opportunities section, to become familiar with the AI safety space. Ideally, by the time of the interview, you're not only familiar with the organizations and opportunities listed, but are familiar with the approximate layout of the page, such that you could point to relevant parts of this page on a call.
  • Have a high-level overview of the AI safety research space, e.g. by skimming through some of the resources on the AI safety section of the Resources page. You definitely do not need to be an expert, but it would be ideal if you could give a one-sentence description of e.g. what scalable oversight is, and why it's relevant to AI safety.

Diversity and inclusion

We're aware that factors like gender, race, and socioeconomic background can affect people's willingness to apply for roles for which they meet some but not all the suggested attributes. We'd especially like to encourage people from underrepresented backgrounds to express interest.

There's no such thing as a "perfect" candidate. If you're on the fence about applying because you're unsure whether you're qualified, we'd encourage you to apply.

Apply here

If you have questions about the role, please reach out to team@arkose.org!