Introduction

Overview

The following is a quantitative analysis of 97 interviews conducted in February-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with a particular focus on risks from advanced AI systems (imprecisely labeled “AGI” for brevity in the rest of this document). Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions and 5 were outside recommendations. For each interviewee, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of the interviews, are available at Interviews.

Findings Summary

Some key findings from our primary questions of interest (not discussing Demographics or “Split-By” subquestions):

  • Most participants (75%), at some point in the conversation, said that they thought humanity would achieve advanced AI (imprecisely labeled “AGI” for the rest of this summary) eventually, but their timelines to AGI varied (source). Within this group:
    • 32% thought it would happen in 0-50 years
    • 40% thought 50-200 years
    • 18% thought 200+ years
    • and 28% were quite uncertain, reporting a very wide range.
    • (These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation.)
  • Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn’t see AGI happening based on current progress in AI. (Source)
  • Participants were pretty split on whether they thought the alignment problem argument was valid. Some common reasons for disagreement were (source):
    1. A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).
    2. Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped in some sense by how bad alignment problems are for humans).
    3. AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).
    4. The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.
    5. Perfect alignment is not needed.
  • Participants were also pretty split on whether they thought the instrumental incentives argument was valid. The most common reasons for disagreement were that 1) the loss function of an AGI would not be designed such that instrumental incentives arise / pose a problem and 2) there would be oversight (by humans or other AI) to prevent this from happening. (Source)
  • Some participants brought up that they were more concerned about misuse of AI than AGI misalignment (n = 17), or that potential risk from AGI was less dangerous than other large-scale risks humanity faces (n = 11). (Source)
  • Of the 55 participants who were asked / had a response to this question, some (n = 13) were potentially interested in working on AI alignment research. (Caveat for bias: the interviewer was less likely to ask this question if the participant believed AGI would never happen and/or the alignment/instrumental arguments were invalid, so as to reduce participant frustration. This question also tended to be asked in later interviews rather than earlier interviews.) Of those participants potentially interested in working on AI alignment research, almost all reported that they would need to learn more about the problem and/or would need to have a more specific research question to work on or incentives to do so. Those who were not interested reported feeling like it was not their problem to address (they had other research priorities, interests, skills, and positions), that they would need examples of risks from alignment problems and/or instrumental incentives within current systems to be interested in this work, or that they felt like they were not at the forefront of such research so would not be a good fit. (Source)
  • Most participants had heard of AI safety (76%) in some capacity (source); fewer had heard of AI alignment (41%) (source).
  • When participants were followed up ~5-6 months after the interview, 51% reported that the interview had a lasting effect on their beliefs (source), and 15% reported that the interview caused them to take new action(s) at work (source).
  • Thinking that the alignment problem argument was valid, or that the instrumental incentives argument was valid, tended to correlate with thinking that AGI would happen at some point. The relationship was not symmetric: participants who found these arguments valid were quite likely to believe AGI would happen, whereas participants who believed AGI would happen were more likely than not to find these arguments valid, but the relationship was weaker in that direction. (Source)
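
To make that asymmetry concrete, here is a minimal sketch with hypothetical counts (not the real data), showing how P(AGI | argument valid) can be high while P(argument valid | AGI) is noticeably lower:

```python
import pandas as pd

# Hypothetical counts (NOT the real data), chosen only to illustrate the asymmetry:
# 60 participants think the argument is valid AND that AGI will happen,
# 15 think AGI will happen but the argument is invalid,
#  5 think the argument is valid but AGI will never happen,
# 20 think neither.
toy = pd.DataFrame({
    "thinks_agi_will_happen": [True] * 60 + [True] * 15 + [False] * 5 + [False] * 20,
    "thinks_argument_valid":  [True] * 60 + [False] * 15 + [True] * 5 + [False] * 20,
})

p_agi_given_valid = toy.loc[toy["thinks_argument_valid"], "thinks_agi_will_happen"].mean()
p_valid_given_agi = toy.loc[toy["thinks_agi_will_happen"], "thinks_argument_valid"].mean()

print(f"P(AGI | argument valid) = {p_agi_given_valid:.2f}")  # 0.92 with these toy counts
print(f"P(argument valid | AGI) = {p_valid_given_agi:.2f}")  # 0.80 with these toy counts
```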

Tags

The tags were developed ad hoc, with the goal of describing common themes in the data. The tags themselves are succinct and not described in detail. To get a sense of what a tag means, please search the tag name in the Tagged-Quotes document, which lists most of the tags used (column 1) alongside the quotes they were applied to (column 2). (This document is also available in Interviews.)

Many of the tags are also rephrased and included in the walkthrough of the interviews.

Limitations

There are two large methodological weaknesses to keep in mind when interpreting the results. First, not every question was asked of every researcher. Some questions were simply added later in the interview process, but others were intentionally asked or skipped based on the interviewer’s judgment of participant interest; questions particularly susceptible to this have an “About this variable” section below describing the situation in more detail.

The second issue is with the tagging, which was somewhat haphazard. One person (not the interviewer) did the majority of the tagging, while another person (the interviewer) assisted and occasionally made corrections. Tagging was not blinded, and importantly, tags were not comprehensively double-checked by the interviewer. If anyone reading this document wishes to do a more systematic tagging of the raw data, we welcome this: much of the raw data is available on this website for analysis, and we’re happy to be contacted for further advice.

With these caveats in mind, we think there is much to be learned from a quantitative analysis of these interviews and present the full results below.

Note: All error bars represent standard error.
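
For reference, a minimal sketch of how the standard error of a reported proportion is typically computed (assuming the usual binomial formula; the exact computation used in this report may differ):

```python
import math

def proportion_standard_error(k: int, n: int) -> float:
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    p = k / n
    return math.sqrt(p * (1 - p) / n)

# Illustration with assumed counts: roughly 73 of 97 participants (~75%)
# said AGI would happen eventually.
se = proportion_standard_error(73, 97)
print(f"proportion = {73 / 97:.2f}, standard error = {se:.3f}")  # ~0.75, SE ~0.044
```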

About this Report

There are two versions of this report: one with interactive graphs, and one with static graphs. To access all of the features of this report, like hovering over graphs to see the number of participants in each category, you need to be using the interactive version. However, the static version loads significantly faster in a browser.

Demographics of Interviewees

Basic Demographics

Gender

genders Freq Perc (%)
Female 8 8
Other 2 2
Male 87 90

Age

Proxy: years since graduating undergrad + 22

Values present for 95/97 participants.

mean: 31.37
median: 30
range: 19 - 56
number with a value of 0: 0
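
A minimal sketch of how this proxy and the summary statistics could be computed (the column name undergrad_grad_year and the example values are hypothetical; the real data are not shown here):

```python
import pandas as pd

# Hypothetical undergraduate graduation years; missing values stay missing.
df = pd.DataFrame({"undergrad_grad_year": [2014, 2010, 2018, None, 1988]})

# Age proxy: years since graduating undergrad + 22.
INTERVIEW_YEAR = 2022
df["age_proxy"] = (INTERVIEW_YEAR - df["undergrad_grad_year"]) + 22

present = df["age_proxy"].dropna()
print(f"values present for {present.size}/{len(df)} participants")
print(f"mean: {present.mean():.2f}")
print(f"median: {present.median():.0f}")
print(f"range: {present.min():.0f} - {present.max():.0f}")
print(f"number with a value of 0: {(present == 0).sum()}")
```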

Location

Country of origin

Proxy: undergraduate country (any country with only 1 participant was re-coded as ‘Other’)

Values present for 97/97 participants.

undergrad_country_simplified Freq
USA 27
Other 16
China 11
India 11
Canada 6
Germany 5
France 4
Italy 4
Iran 3
Israel 3
Taiwan 3
Turkey 2
UK 2
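
A minimal sketch of the ‘Other’ re-coding described above (the column name and example values are hypothetical):

```python
import pandas as pd

# Hypothetical raw country labels; the real data are not shown here.
countries = pd.Series(["USA", "USA", "China", "China", "Norway", "Chile"],
                      name="undergrad_country")

# Re-code any country that appears only once as 'Other'.
counts = countries.value_counts()
simplified = countries.where(countries.map(counts) > 1, "Other")

print(simplified.value_counts())
# USA and China keep their labels (2 participants each);
# Norway and Chile are singletons, so both become 'Other'.
```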

Current country of work

(Any country with only 1 participant was re-coded as ‘Other’)

Values present for 97/97 participants.

current_country_simplified Freq
USA 57
Other 10
Canada 9
UK 7
China 4
France 3
Switzerland 3
Germany 2
Israel 2

What area of AI?

Area of AI was evaluated in two ways: first, by asking the participant directly in the interview (Field1), and second, by looking up participants’ websites and Google Scholar Interests (Field2). A comparison of Field1 and Field2 is located here. The two do not agree particularly closely, so we usually include comparisons using both Field1 and Field2. We tend to think the Field2 labels (from Google Scholar and websites) are more accurate than the Field1 labels, because the data were a little more regular and the tagger was more experienced. We also tend to think Field2 has better external validity: for both Field1 and Field2, we computed the correlation, across fields, between the proportion of participants in each field who found the alignment problem argument valid and the proportion who found the instrumental incentives argument valid. This correlation was much higher for Field2 than for Field1. Given that we expect these two arguments to probe a similar construct, the higher correlation suggests better external validity for the Field2 grouping.
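
A minimal sketch of that validity check, using hypothetical per-field proportions (the real values are not reproduced here):

```python
import numpy as np
import pandas as pd

# Hypothetical per-field proportions (NOT the real data): for each field grouping,
# the proportion of participants in that field who found each argument valid.
fields = ["NLP", "RL", "vision", "neurocogsci", "near-term AI safety"]
prop_alignment_valid    = pd.Series([0.40, 0.55, 0.35, 0.50, 0.45], index=fields)
prop_instrumental_valid = pd.Series([0.42, 0.60, 0.30, 0.48, 0.50], index=fields)

# If the two arguments probe a similar construct, a field grouping that carves up
# the sample well should yield a high correlation between these two sets of proportions.
r = np.corrcoef(prop_alignment_valid, prop_instrumental_valid)[0, 1]
print(f"correlation across fields: r = {r:.2f}")
```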

Field 1 (from interview response)

“Can you tell me about what area of AI you work on, in a few sentences?”

Values are present for 97/97 participants.

Note: “NLP” = natural language processing. “RL” = reinforcement learning. “vision” = computer vision. “neurocogsci” = neuroscience or cognitive science. “near-term AI safety” = AI safety generally and related areas (includes robustness, privacy, fairness). “long-term AI safety” = AI alignment and/or AI safety oriented at advanced AI systems.