Introduction

Overview

The following is a quantitative analysis of 97 interviews conducted in Feb-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with particular focus on risks from advanced AI systems (imprecisely labeled “AGI” for brevity in the rest of this document). Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions and 5 were outside recommendations. For each interviewee, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of the interviews, are available at Interviews.

Findings Summary

Some key findings from our primary questions of interest (not discussing Demographics or “Split-By” subquestions):

  • Most participants (75%), at some point in the conversation, said that they thought humanity would achieve advanced AI (imprecisely labeled “AGI” for the rest of this summary) eventually, but their timelines to AGI varied (source). Within this group:
    • 32% thought it would happen in 0-50 years
    • 40% thought 50-200 years
    • 18% thought 200+ years
    • and 28% were quite uncertain, reporting a very wide range.
    • (These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation.)
  • Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn’t see AGI happening based on current progress in AI. (Source)
  • Participants were pretty split on whether they thought the alignment problem argument was valid. Some common reasons for disagreement were (source):
    1. A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).
    2. Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped in some sense by how bad alignment problems are for humans).
    3. AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).
    4. The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.
    5. Perfect alignment is not needed.
  • Participants were also pretty split on whether they thought the instrumental incentives argument was valid. The most common reasons for disagreement were that 1) the loss function of an AGI would not be designed such that instrumental incentives arise / pose a problem and 2) there would be oversight (by humans or other AI) to prevent this from happening. (Source)
  • Some participants brought up that they were more concerned about misuse of AI than AGI misalignment (n = 17), or that potential risk from AGI was less dangerous than other large-scale risks humanity faces (n = 11). (Source)
  • Of the 55 participants who were asked / had a response to this question, some (n = 13) were potentially interested in working on AI alignment research. (Caveat for bias: the interviewer was less likely to ask this question if the participant believed AGI would never happen and/or the alignment/instrumental arguments were invalid, so as to reduce participant frustration. This question also tended to be asked in later interviews rather than earlier interviews.) Of those participants potentially interested in working on AI alignment research, almost all reported that they would need to learn more about the problem and/or would need to have a more specific research question to work on or incentives to do so. Those who were not interested reported feeling like it was not their problem to address (they had other research priorities, interests, skills, and positions), that they would need examples of risks from alignment problems and/or instrumental incentives within current systems to be interested in this work, or that they felt like they were not at the forefront of such research so would not be a good fit. (Source)
  • Most participants had heard of AI safety (76%) in some capacity (source); fewer had heard of AI alignment (41%) (source).
  • When participants were followed-up with ~5-6 months after the interview, 51% reported the interview had a lasting effect on their beliefs (source), and 15% reported the interview caused them to take new action(s) at work (source).
  • Thinking the alignment problem argument was valid, or the instrumental incentives argument was valid, both tended to correlate with thinking AGI would happen at some point. The effect wasn’t symmetric: if participants thought these arguments were valid, they were quite likely to believe AGI would happen; if participants thought AGI would happen, it was still more likely that they thought these arguments were valid but the effect was less strong. (Source)

Tags

The tags were developed ad hoc, with the goal of describing common themes in the data. The tags are succinct and are not described in detail here. Thus, to get a sense of what a tag means, please search the tag name in the Tagged-Quotes document, which lists most of the tags used (column 1) alongside attached quotes (column 2). (This document is also available in Interviews.)

Many of the tags are also rephrased and included in the walkthrough of the interviews.

Limitations

There are two large methodological weaknesses that should be kept in mind when interpreting the results. First, not every question was asked of every researcher. While some questions were just added later in the interview process, some questions were intentionally asked or avoided based on interviewer judgment of participant interest; questions particularly susceptible to this have an “About this variable” section below to describe the situation in more detail.

The second issue is with the tagging, which was somewhat haphazard. One person (not the interviewer) did the majority of the tagging, while another person (the interviewer) assisted and occasionally made corrections. Tagging was not blinded, and importantly, tags were not comprehensively double-checked by the interviewer. If anyone reading this document wishes to do a more systematic tagging of the raw data, we welcome this: much of the raw data is available on this website for analysis, and we’re happy to be contacted for further advice.

With these caveats in mind, we think there is much to be learned from a quantitative analysis of these interviews and present the full results below.

Note: All error bars represent standard error.
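Many of the plotted quantities are proportions of participants; a minimal sketch of how a standard error could be computed for such a proportion in R (assuming a binomial standard error, which the source does not specify; the numbers are hypothetical):

# Hypothetical example: 55 of 95 respondents with a response were tagged 'valid'
p <- 55 / 95                    # observed proportion
n <- 95                         # number of respondents with a response
se <- sqrt(p * (1 - p) / n)     # standard error of a proportion
c(proportion = p, std_error = se)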

About this Report

There are two versions of this report: one with interactive graphs, and one with static graphs. To access all of the features of this report, like hovering over graphs to see the number of participants in each category, you need to be using the interactive version. However, the static version loads significantly faster in a browser.

Demographics of Interviewees

Basic Demographics

Gender

genders Freq Perc
Female 8 8
Other 2 2
Male 87 90

Age

Proxy: years since graduating from undergrad + 22 years

Values present for 95/97 participants.

## mean: 31.3684210526316
## median: 30
## range: 19 - 56
## # with value of 0: 0
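A minimal sketch of how this proxy and the summary above could be computed in R (the data frame and column names are hypothetical):

# Hypothetical data; in the real analysis this comes from the interview records
df <- data.frame(undergrad_grad_year = c(2014, 2010, NA, 2005))

# Proxy: years since graduating undergrad, plus 22
df$age_proxy <- (2022 - df$undergrad_grad_year) + 22

vals <- df$age_proxy[!is.na(df$age_proxy)]
cat("mean:", mean(vals), "\n")
cat("median:", median(vals), "\n")
cat("range:", min(vals), "-", max(vals), "\n")
cat("# with value of 0:", sum(vals == 0), "\n")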

Location

Country of origin

Proxy: Undergrad country (Any country with only 1 participant got re-coded as ‘Other’)

Values present for 97/97 participants.

undergrad_country_simplified Freq
USA 27
Other 16
China 11
India 11
Canada 6
Germany 5
France 4
Italy 4
Iran 3
Israel 3
Taiwan 3
Turkey 2
UK 2

Current country of work

(Any country with only 1 participant got re-coded as ‘Other’)

Values present for 97/97 participants.

current_country_simplified Freq
USA 57
Other 10
Canada 9
UK 7
China 4
France 3
Switzerland 3
Germany 2
Israel 2

What area of AI?

Area of AI was evaluated in two ways. First, by asking the participant directly in the interview (Field1) and second, by looking up participants’ websites and Google Scholar Interests (Field2). A comparison of Field1 and Field2 is located here. The comparison isn’t particularly close, so we usually include comparisons using both Field1 and Field2. We tend to think the Field2 labels (from Google Scholar and websites) are more accurate than Field1, because the data was a little more regular and the tagger was more experienced. We also tend to think Field2 has better external validity: for both field1 and field2, we ran a correlation between proportion of participants in that field who found the alignment arguments valid and those who found the instrumental arguments valid. This correlation was much higher for field2 than field1. Given that we expect these two arguments are probing a similar construct, the higher correlation suggests better external validity for the field2 grouping.
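A minimal sketch of the kind of check described above, in R. The per-field 'valid' proportions here are taken from a few rows of the field2 tables later in this report (converted from 'invalid' to 'valid' proportions); the object and column names are hypothetical:

# One row per field; the proportion of that field's participants who found each argument valid
field2_props <- data.frame(
  field         = c("Computer.Vision", "NLP", "Reinforcement.Learning", "Inference"),
  align_valid   = c(0.43, 0.42, 0.70, 0.88),
  instrum_valid = c(0.53, 0.55, 0.58, 0.78)
)

# A higher correlation across fields suggests the two arguments are probing a similar
# construct, which we take as evidence in favor of that field grouping
cor.test(field2_props$align_valid, field2_props$instrum_valid)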

Field 1 (from interview response)

“Can you tell me about what area of AI you work on, in a few sentences?”

Values are present for 97/97 participants.

Note: “NLP” = natural language processing. “RL” = reinforcement learning. “vision” = computer vision. “neurocogsci” = neuroscience or cognitive science. “near-term AI safety” = AI safety generally and related areas (includes robustness, privacy, fairness). “long-term AI safety” = AI alignment and/or AI safety oriented at advanced AI systems.

Field 2 (from Google Scholar)

Note: “Near-term Safety and Related” included privacy, robustness, adversarial learning, security, interpretability, XAI, trustworthy AI, ethical AI, fairness, near-term AI safety, and long-term AI safety.

At least 1 field2 tag is present for 95/97 participants.

Sector (Academia vs. Industry)

sector_combined Freq Perc
academia 66 68
industry 21 22
academiaindustry 7 7
research_institute 3 3

Status / Experience

h-index

h-index values present for 87/97 participants.

Note that participants are in different fields, which tend to have different average h-index values.

One participant is a noticeable outlier (this person is not primarily in AI). Distribution of the remaining values…

## mean: 14.5232558139535
## median: 8
## range: 0 - 87
## # with value of 0: 1

Years of Experience

Proxy: years since they started their PhD. If someone has never begun a PhD, they are excluded from this measure (i.e. marked as NA).

Values present for 81/97 participants.

## mean: 8.33333333333333
## median: 6
## range: 0 - 29
## # with value of 0: 1

Professional Rank

“Status” in Feb 2022

(Any category with only 1 participant got re-coded as ‘Other’)

rank_simplified Freq
PhD Student 38
Other 15
Assistant Professor 10
Postdoc 8
Research Scientist 6
Masters 5
Full Professor 3
Senior Research Scientist 3
Software Engineer 3
Associate Professor 2
Research Staff 2
Undergraduate 2

Institution Rank

Participants’ institutions were determined from Google searches. University rank was determined using the websites below (searched in fall 2022); industry size was determined mostly by searching company size on LinkedIn/Google.

Academia

University Ranking in CS (from U.S. News & World Report - lower number = better rank)
Values present for 69/73 academics.

## mean: 59.2753623188406
## median: 37
## range: 2 - 276
## # with value of 0: 0

University Ranking Overall (from U.S. News & World Report - lower number = better rank)
Values present for 72/73 academics.

## mean: 107.083333333333
## median: 60
## range: 1 - 1095
## # with value of 0: 0

Industry

indust_size Freq
under10_employees 2
10-100_employees 1
50-200_employees 1
200-500_employee_company 4
1k-10k_employees 1
10-50k_employees 4
50k+_employees 14
50k+_employees / under10_employees 1

Preliminary Attitudes

What motivates you?

“How did you come to work on this specific topic? What motivates you in your work (psychologically)?”

22/97 participants had some kind of response. This question was only included in earlier interviews (chronologically), before being removed from the standard question list. For example quotes, search the tag names in the Tagged-Quotes document.

Benefits

“What are you most excited about in AI, and what are you most worried about? (What are the biggest benefits or risks of AI?)” ← benefits part

89/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

Risks

“What are you most excited about in AI, and what are you most worried about? (What are the biggest benefits or risks of AI?)” ← risks part

95/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

Future

“In at least 50 years, what does the world look like?”

95/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

Primary ?s - Descriptives

When will we get AGI?

Note: “AGI” stands in for “advanced AI systems”, and is used for brevity

  • Example dialogue: “All right, now I’m going to give a spiel. So, people talk about the promise of AI, which can mean many things, but one of them is getting very general capable systems, perhaps with the cognitive capabilities to replace all current human jobs so you could have a CEO AI or a scientist AI, etcetera. And I usually think about this in the frame of the 2012: we have the deep learning revolution, we’ve got AlexNet, GPUs. 10 years later, here we are, and we’ve got systems like GPT-3 which have kind of weirdly emergent capabilities. They can do some text generation and some language translation and some code and some math. And one could imagine that if we continue pouring in all the human investment that we’re pouring into this like money, competition between nations, human talent, so much talent and training all the young people up, and if we continue to have algorithmic improvements at the rate we’ve seen and continue to have hardware improvements, so maybe we get optical computing or quantum computing, then one could imagine that eventually this scales to more of quite general systems, or maybe we hit a limit and we have to do a paradigm shift in order to get to the highly capable AI stage. Regardless of how we get there, my question is, do you think this will ever happen, and if so when?”

96/97 participants had some kind of response.

Some participants had both “will happen” and “won’t happen” tags (e.g. because they changed their response during the conversation) and are labeled as “both”.

Note: most of the graphs on this doc are not exclusive (same person can be represented in multiple bars), but the one below is. So each of the 97 participants is represented exactly once.

73 / 97 (75%) said at some point in the conversation that it will happen.

Among the 73 people who said at any point that it will happen…

Among the 30 people who said at any point that it won’t happen…

Split by Field

Visualizing AGI time horizon broken down by field is tricky, because participants could be tagged with multiple fields and with multiple time horizons. So if, say, someone in the Vision field was tagged with both ‘<50’ and ‘50-200’ time horizons, including both tags on a bar plot would give the impression that there were actually two people in Vision, one with each time horizon. This would result in an over-representation of people who had multiple tags (n = 21). Thus, for only the cases where we are examining time-horizon split by field, we simplified by assigning one time-horizon per participant: if they ever endorsed ‘wide range’, they were assigned ‘wide range’; otherwise, they were assigned whichever of their endorsed time horizons was the soonest.
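A minimal sketch of this simplification in R, assuming each participant's endorsed time horizons are stored as a character vector of tags (all names hypothetical):

# Each element: the set of time-horizon tags one participant endorsed
horizons <- list(
  c("<50", "50-200"),
  c("wide range", ">200"),
  c("wonthappen")
)

# Ordering used to pick the "soonest" endorsed horizon
soonest_order <- c("<50", "50-200", ">200", "wonthappen")

simplify_horizon <- function(tags) {
  if (length(tags) == 0) return("None/NA")
  if ("wide range" %in% tags) return("wide range")  # 'wide range' takes priority
  soonest_order[min(match(tags, soonest_order))]    # otherwise, the soonest endorsed tag
}

sapply(horizons, simplify_horizon)  # "<50" "wide range" "wonthappen"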

The simplification above results in the following breakdown:

## whenAGIdata_simp_lowest
##    None/NA        <50     50-200       >200 wide range wonthappen 
##          4         19         24          9         20         21

An alternative solution for those with multiple time-horizon tags would have been to assign each multi-tag case its own tag. We chose not to do this for the following graphs, in part because there would have been 15 timing tags, the breakdown of which is represented in the table below.

Var1 Freq
wonthappen 21
50-200 20
<50 16
wide range 10
>200 5
>200 + wonthappen 4
None/NA 4
wide range + 50-200 4
50-200 + wonthappen 3
wide range + <50 3
<50 + wonthappen 2
wide range + >200 2
<50 + 50-200 1
50-200 + >200 1
wide range + <50 + >200 1

Field 1 (from interview response)

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘long.term.AI.safety’ category for whom we have an answer for the when-AGI question (which is 2 total participants), 100% of them said ‘<50’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation/summary: No one in NLP/translation, near-term safety, or interpretability/explainability endorsed a <50 year time horizon. Meanwhile, no one in long-term AI safety, neuro/cognitive science, or robotics said AGI won’t happen. People in theory were somewhat more likely to give a wide range.

Field 2 (from Google Scholar)

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘Deep.Learning’ category for whom we have an answer for the when-AGI question (which is 25 total participants), 28% of them said ‘<50’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation/summary: No one in NLP or Optimization endorsed a <50 year time horizon. Meanwhile, no one in Applications/Data Analysis or Inference said AGI won’t happen. People in vision were somewhat more likely to say that AGI wouldn’t happen.

Split by Sector

The proportions below exclude people in research institutes. So, for all the people in the ‘wide range’ category (N=19), 79% of them are in academia and 21% of them are in industry. People in both sectors get counted for both (so if everyone in a category were in both sectors, it would show 100% academia and 100% industry). If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.
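A minimal sketch of this counting rule in R, where a participant can belong to more than one sector (all names and values hypothetical):

# Hypothetical data: the sectors each participant belongs to, within one timing category
sectors <- list(c("academia"), c("industry"), c("academia", "industry"), c("academia"))

prop_acad  <- mean(sapply(sectors, function(s) "academia" %in% s))
prop_indus <- mean(sapply(sectors, function(s) "industry" %in% s))
c(academia = prop_acad, industry = prop_indus)  # can sum to more than 1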

Observation: Very roughly/noisily: as timelines get longer, a larger proportion of the participants fall in academia and a smaller proportion fall into industry… except for ‘won’t happen’.

Split by Age

Remember, age was estimated based on college graduation year

Observation: Not much going on here.

Split by h-index

For the graphs below, the interviewee with the outlier h-index value (>200) was removed.

Observation: People with closer time horizons seem to have higher h-indices.

Alignment Problem

“What do you think of the argument ‘highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous’?”

  • Example dialogue: “Alright, so these next questions are about these highly intelligent systems. So imagine we have a CEO AI, and I’m like, “Alright, CEO AI, I wish for you to maximize profit, and try not to exploit people, and don’t run out of money, and try to avoid side effects.” And this might be problematic, because currently we’re finding it technically challenging to translate human values, preferences and intentions into mathematical formulations that can be optimized by systems, and this might continue to be a problem in the future. So what do you think of the argument “Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous”?”

95/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

Among the 58 people who said at any point that it is invalid…

Split by Field

I’m going to simplify by saying that if someone ever said valid, then their answer is valid. If someone gave any of the other responses but never said valid, they will be marked as invalid.
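A minimal sketch of this recoding in R, assuming each participant's alignment-response tags are stored as a character vector (all names hypothetical):

# Hypothetical tag sets for three participants
align_tags <- list(
  c("valid", "alignment will be solved in the normal course of AI development"),
  c("humans have alignment problems too"),
  character(0)
)

simplify_validity <- function(tags) {
  if (length(tags) == 0) return("None/NA")
  if ("valid" %in% tags) return("valid")  # ever said valid -> valid
  "invalid.other"                         # responded, but never said valid
}

sapply(align_tags, simplify_validity)  # "valid" "invalid.other" "None/NA"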

The simplification above results in the following breakdown:

## alignment_validity
## invalid.other       None/NA         valid 
##            40             2            55

Field 1 (from interview response)

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘long.term.AI.safety’ category for whom we have an answer for the alignment problem (which is 2 total participants), 100% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation/summary: people in vision, NLP / translation, & deep learning were more likely to think the AI alignment arguments were invalid, with a >50% chance of not saying the arguments are valid. Meanwhile, people in RL, interpretability / explainability, robotics, & safety were pretty inclined (>60%) to say at some point that the argument was valid.

Field 2 (from Google Scholar)

The graphs below show the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘Deep.Learning’ category for whom we have an answer for the alignment problem (which is 26 total participants), 65% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation/summary: People in Computing, NLP, Computer Vision, & Math or Theory were more likely to think the AI alignment arguments were invalid, with a >50% chance of not saying the arguments are valid. Meanwhile, people in Inference and Near-Term Safety and Related were very likely (>80%) to say at some point that the argument was valid.

Split by: Heard of AI alignment?

Specifically, split by the participants’ answer to the question “Heard of AI alignment?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI alignment?”; we will use those binarized tags rather than the initial tags.)

Proportions…

Observation: People who had heard of AI alignment were a bit more likely to find the alignment argument valid than people who had not heard of AI alignment, but not by a huge margin.

There’s a subgroup of interest: those who had not heard of AI alignment before but thought the argument for it was valid. What fields (using field2) are these 30 people in?

It would help to have some base rates to interpret the above graph. The two graphs below provide that by showing 1) the proportion of people who said they had not heard of AI alignment among those who said the alignment argument was valid and 2) the proportion of people who said the alignment argument was valid among those who said they had not heard of AI alignment.

Split by: Heard of AI safety?

Specifically, split by the participants’ answer to the question “Heard of AI safety?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI safety?”; we will use those binarized tags rather than the initial tags.)

Proportions…

Observation: People who had heard of AI safety were more likely to find the alignment argument valid than people who had not heard of AI safety.

Split by: When will we get AGI?

I will simplify by marking as ‘willhappen’ anyone who ever said ‘willhappen’ (regardless of whether they also said ‘wonthappen’)

Proportions…

Observation: People who say AGI won’t happen are less likely to say the alignment argument is valid.

Also look at the more detailed data of how many years they think it will take for AGI to happen:

The proportions below exclude people who did not answer the alignment problem (none/NA values). So, for all the people in the ‘wide range’ category for whom we have an answer for the alignment problem (which is 19 total participants), 58% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: The variation is enormous so we are reluctant to draw too many conclusions from this data, but it’s interesting to note the non-linear relationship with timing. Those whose range is 50-200 or very wide are less likely to think the argument is valid compared to those who think it’s <50 or >200.

Instrumental Incentives

“What do you think about the argument: ‘highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous’?”

  • Example dialogue: “Alright, next question is, so we have a CEO AI and it’s like optimizing for whatever I told it to, and it notices that at some point some of its plans are failing and it’s like, “Well, hmm, I noticed my plans are failing because I’m getting shut down. How about I make sure I don’t get shut down? So if my loss function is something that needs human approval and then the humans want a one-page memo, then I can just give them a memo that doesn’t have all the information, and that way I’m going to be better able to achieve my goal.” So not positing that the AI has a survival function in it, but as an instrumental incentive to being an agent that is optimizing for goals that are maybe not perfectly aligned, it would develop these instrumental incentives. So what do you think of the argument, “Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous”?”

91/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

Among the 51 people who said at any point that it is invalid…

Observation: The most common reasons cited by those who think the argument is invalid are “won’t design loss function this way” and “will have human oversight / AI checks & balances”.

Split by Field

I’m going to simplify by saying that if someone ever said valid, then their answer is valid.

The simplification above results in the following breakdown:

## instrum_validity
## invalid None/NA   valid 
##      36       6      55

Field 1 (from interview response)

The graphs below show the proportion of people (excluding the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘long.term.AI.safety’ category for whom we have an answer for instrumental incentives (which is 2 total participants), 100% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.


Observation: Some thoughts, from comparing the ‘align’ & ‘instrum’ analyses (see the table below for the ‘invalid’ percentages for both; it excludes fields with only 1-2 members because they make the rankings wonky):

  • There isn’t much agreement between the above info and the same analysis for the alignment argument. As a rough proxy, I correlated the field percentages for the two arguments: r = 0.426, p = 0.146.
  • If anything, the ‘invalid’ percentages are a little higher for alignment than instrumental.
  • People in vision and deep learning were more likely to find both arguments invalid.
  • People in near-term safety, RL, & neurocogsci largely buy into both arguments.

field align_invalid instrum_invalid total difference align_rank instrum_rank rank_sum
vision 0.62 0.50 14 0.12 1 3 4
deep.learning 0.55 0.40 10 0.15 3 5 8
NLP.or.translation 0.57 0.38 13 0.19 2 6 8
theory 0.44 0.50 16 -0.06 6 3 9
uncategorized.ML 0.32 0.52 21 -0.20 8 2 10
robotics 0.20 0.60 5 -0.40 11 1 12
neurocogsci 0.43 0.38 8 0.05 7 6 13
applications 0.48 0.26 23 0.22 5 10 15
RL 0.31 0.38 16 -0.07 9 6 15
systems.or.computing 0.50 0.25 8 0.25 4 11 15
interpretability.or.explainability 0.25 0.38 8 -0.13 10 6 16
near.term.AI.safety 0.17 0.17 6 0.00 12 12 24
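A minimal sketch of how the rank columns in the table above could be derived from the ‘invalid’ percentages in R (the data frame name is hypothetical; only two rows are shown, and ties may have been handled differently in the actual table):

fields <- data.frame(
  field           = c("vision", "deep.learning"),
  align_invalid   = c(0.62, 0.55),
  instrum_invalid = c(0.50, 0.40)
)

# Rank 1 = highest 'invalid' percentage for that argument
fields$align_rank   <- rank(-fields$align_invalid)
fields$instrum_rank <- rank(-fields$instrum_invalid)
fields$rank_sum     <- fields$align_rank + fields$instrum_rank
fields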

Field 2 (from Google Scholar)

The graphs below show the proportion of people (excluding the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘Deep.Learning’ category for whom we have an answer for instrumental incentives (which is 25 total participants), 68% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.


Observation: Some thoughts, from comparing the ‘Alignment’ & ‘Instrumental’ analyses (see below for table with ‘invalid’ percentages for both):

  • As a rough proxy of agreement between the above info and the same analysis for the alignment argument, I correlated the field2 percentages for the two arguments. The agreement between them (r = 0.553, p = 0.062) was a bit stronger than when doing the same analysis using the field1 tags.
  • People in Inference, Near-Term Safety and Related, and Deep Learning tend to agree with these arguments.

field2 align_invalid instrum_invalid total difference align_rank instrum_rank rank_sum
Computer.Vision 0.57 0.47 19 0.10 3.0 2 5.0
NLP 0.58 0.45 11 0.13 2.0 4 6.0
Robotics 0.38 0.50 8 -0.12 7.5 1 8.5
Math.or.Theory 0.56 0.44 16 0.12 4.0 5 9.0
Computational.Neuro.or.Bio 0.50 0.44 9 0.06 5.0 5 10.0
Computing 0.67 0.33 6 0.34 1.0 9 10.0
Optimization 0.38 0.46 13 -0.08 7.5 3 10.5
Applications.or.Data.Analysis 0.42 0.42 19 0.00 6.0 7 13.0
Reinforcement.Learning 0.30 0.42 19 -0.12 10.0 7 17.0
Deep.Learning 0.35 0.32 25 0.03 9.0 10 19.0
Near.term.Safety.and.Related 0.18 0.29 17 -0.11 11.0 11 22.0
Inference 0.12 0.22 9 -0.10 12.0 12 24.0

Split by: Heard of AI alignment?

Specifically, split by the participants’ answer to the question “Heard of AI alignment?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI alignment?”; we will use those binarized tags rather than the initial tags.)

Proportions…

Observation: People who had heard of AI alignment were more likely to find the instrumental argument valid than people who had not heard of AI alignment.

Split by: Heard of AI safety?

Specifically, split by the participants’ answer to the question “Heard of AI safety?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI safety?”; we will use those binarized tags rather than the initial tags.)

Proportions…

Observation: People who had heard of AI safety were more likely to find the instrumental argument valid than people who had not heard of AI safety.

Split by: When will we get AGI?

I will simplify by marking as ‘will happen’ anyone who ever said ‘will happen’ (regardless of whether they also said ‘won’t happen’)

Proportions…

Observation: People who say that AGI will happen tend to agree more with the instrumental incentives argument.

Also look at the more detailed data of how many years they think it will take for AGI to happen:

The proportions below exclude people who did not answer the instrumental problem (none/NA values). So, for all the people in the ‘wide range’ category for whom we have an answer for instrumental incentives (which is 19 total participants), 63% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: This data doesn’t really show the same pattern as the data for the alignment problem. If anything, one of the groups with a relatively higher percentage of people saying “invalid” to the alignment argument – those whose time horizon is 50-200 – tends to most agree with the instrumental argument. I must reiterate how messy/variable this data is, so we shouldn’t make too much of it.

Merged/Extended Discussion

These are sub-tags under the “alignment/instrumental” tag category, referring to further discussion that occurred regarding the alignment problem / instrumental incentives.

29/97 participants had some kind of response. Participants could be tagged in multiple categories.

response total_participants
misuse.is.bigger.problem 17
not.as.dangerous.as.other.large.scale.risks 11
need.to.know.what.type.of.AGI.for.safety 7

Alignment+Instrumental Combined

Look at people who said ‘valid’ to both of these questions, as this is likely a more stable measure of people who agree with the broadly-understood premises of AI safety. To be considered ‘valid’ for this measure, the participant must have had a response for both questions, and both of those responses had to be valid. If they were missing a response for either measure, they are considered “None/NA”. Otherwise, they are marked as ‘invalid’.
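A minimal sketch of this combined measure in R, assuming the simplified per-argument codes from the sections above (the vector contents are hypothetical):

# Simplified codes for four hypothetical participants
align   <- c("valid", "valid",   "invalid.other", NA)
instrum <- c("valid", "invalid", "valid",         "valid")

combined <- ifelse(is.na(align) | is.na(instrum), "None/NA",
            ifelse(align == "valid" & instrum == "valid", "valid", "invalid"))
combined  # "valid" "invalid" "invalid" "None/NA"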

90/97 participants had a response here that wasn’t “None/NA”.

Split by Field

Field 1 (from interview response)

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘long.term.AI.safety’ category for whom we have an answer for both alignment and instrumental (which is 2 total participants), 100% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation/summary: Unsurprisingly, people working in AI safety were most likely to be tagged ‘valid’ for this metric. Next were RL and interpretability/explainability, at 50%+ chance of saying ‘valid.’ Deep learning & uncategorized ML people were most likely to be tagged as ‘invalid’ for this metric.

Field 2 (from Google Scholar)

The graphs below show the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each field. So, for all the people in the ‘Deep.Learning’ category for whom we have an answer for both alignment and instrumental (which is 25 total participants), 52% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation/summary: Participants in Inference or Near-Term Safety & Related were most likely to say ‘valid’ for both arguments. Meanwhile, >80% of people in Computing and in NLP (who answered both questions, of course) said ‘invalid’ to at least one of them.

Split by: Heard of AI alignment?

Specifically, split by the participants’ answer to the question “Heard of AI alignment?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI alignment?”; we will use those binarized tags rather than the initial tags.)

Proportions…

Observation: People who had heard of AI alignment were more likely to find both arguments valid than people who had not heard of AI alignment.

Split by: Heard of AI safety?

Specifically, split by the participants’ answer to the question “Heard of AI safety?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI safety?”; we will use those binarized tags rather than the initial tags.)

Proportions…

Observation: People who had heard of AI safety were more likely to find both arguments valid than people who had not heard of AI safety.

Split by: When will we get AGI?

I will simplify by marking as ‘willhappen’ anyone who ever said ‘willhappen’ (regardless of whether they also said ‘wonthappen’)

Proportions…

Observation: People who say AGI won’t happen are more likely to say both arguments are invalid. Note that the converse is not true (people who say at least one of the arguments is invalid still largely believe that AGI will happen).

Also look at the more detailed data of how many years they think it will take for AGI to happen:

The proportions below exclude people who did not answer both questions (none/NA values). So, for all the people in the ‘wide range’ category for whom we have an answer for both alignment and instrumental (which is 18 total participants), 33% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Interestingly, people who had estimates for when AGI was going to happen (regardless of what those estimates actually were) were more inclined to agree with the two arguments, compared to people who estimated a wide range or thought it wouldn’t happen.

Split by Sector

i.e. academia vs. industry vs. research institute

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) with each answer type within each sector. So, for all the people in the ‘academia’ category for whom we have an answer for both alignment and instrumental (which is 68 total participants), 44% of them said ‘valid’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: People in academia are a bit more likely to say both arguments are valid than people in industry, but not by much and the error bars very much overlap.

Split by Age

Remember, age was estimated based on college graduation year.

Observation: The people we didn’t end up getting a response from (for both questions) tended to be a little older.

Split by h-index

For the graphs below, that person with the outlier h-index value (>200) was removed.

Observation: Those who thought the arguments were valid had notably higher h-indices than those who thought they were invalid.

Work on this

This question was asked in many different ways, which is not ideal; via follow-up questions, the central question the interviewer tried to elicit an answer to was: “Would you work on AI alignment research?”

  • Some of the varied question prompts: “Have you taken any actions, or would you take any actions, in your work to address your perceived risks from AI?”, “If you were working on these research questions in a year, how would that have happened?”, “What would motivate you to work on this?” “What kind of things would need to be in place for you to either work on these sort of long-term AI issues or just have your colleagues work on it?”

  • The varied question prompts resulted in some unusual tags. In particular, the tag “says Yes but working on near-term safety” means that the interviewer meant to ask whether the participant was working in long-term safety (safety aimed at advanced AI systems), but the participant interpreted the question as asking about their involvement in general safety research, and replied “Yes” for working on near-term safety research.

55/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

There is also a more focused/simplified version of this variable, with four categories (a sketch of this recoding follows the list):

  • “No” (if people are tagged “No”, or “No”+“says Yes but working on near-term safety”, or “says Yes but working on near-term safety” alone)
  • “Yes” (if people are tagged “Yes, working in long-term safety”)
  • “Interested in long-term safety but” (if people are tagged as “Interested in long-term safety but” with the possible additions of “No” and/or “says Yes but working on near-term safety”)
  • “None/NA” if participants didn’t have a response, or had a response that did not fit into any of the above categories
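A minimal sketch of this recoding in R, assuming each participant's work-on-this tags are stored as a character vector (all names hypothetical):

simplify_work <- function(tags) {
  if (length(tags) == 0) return("None/NA")
  if ("Yes, working in long-term safety" %in% tags) return("Yes")
  if ("Interested in long-term safety but" %in% tags) return("Interested in long-term safety but")
  if (any(tags %in% c("No", "says Yes but working on near-term safety"))) return("No")
  "None/NA"  # responded, but nothing that fits the categories above
}

simplify_work(c("No", "says Yes but working on near-term safety"))  # "No"
simplify_work(c("Interested in long-term safety but", "No"))        # "Interested in long-term safety but"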

Among the 13 people who said “Interested in longterm safety but…” at any point…

Among the 30 people who said “No” at any point…

About this variable

Response Bias: The interviewer tended not to ask this question to people who believed AGI would never happen and/or the alignment/instrumental arguments were invalid, to reduce interviewee frustration. (One can see this effect in the “None/NA” categories for “Split by: When will we get AGI?”, “Split by: Alignment Problem”, and “Split by: Instrumental Incentives” below.) Thus, it is not surprising that people who gave these responses to those questions were less likely to have data for “Work on this.” We’ve further learned from the data below that those who had not heard of AI alignment and those who had not heard of AI safety were also less likely to have data for “Work on this.”
Order effects: The interviewer put a greater emphasis on asking this question as the study went on, so participants later in the study were more likely to be asked. See graphs below depicting the presence of a response X interview order.

Split by Field

Using the focused/simplified version described above.

Field 1 (from interview response)

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) who said either ‘Interested…’ or ‘Yes’ within each field. So, for all the people in the ‘long.term.AI.safety’ category for whom we have an answer for the work-on-this question (which is 1 total participant), 100% of them said ‘Interested…’ or ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: The systems/computing people (n = 8 including those with no response to this question, n = 4 with a response) were pretty interested in working on this.

Field 2 (from Google Scholar)

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) who said either ‘Interested…’ or ‘Yes’ within each field. So, for all the people in the ‘Deep.Learning’ category for whom we have an answer for the work-on-this question (which is 16 total participants), 38% of them said ‘Interested…’ or ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Nothing stands out very strongly, but the NLP people (n = 12 including those with no response to this question, n = 6 with a response) were the most interested in working on this.

Split by: Heard of AI alignment?

Specifically, split by the participants’ answer to the question “Heard of AI alignment?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI alignment?”; we will use those binarized tags rather than the initial tags.)

The proportions below exclude people who did not answer the “Heard of AI alignment?” question. So, for all the people in the ‘None/NA’ category for whom we have an answer for the heard-of-alignment question (which is 45 total participants), 33% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

It’s useful to know the proportions the other way around, too (i.e. what proportion are interested in working on this among those who had vs. hadn’t heard of it)

Observation: If we combine those who are interested and those who already work on long-term safety (and consider only those respondents who answered the work-on-this question): 36% of those who had heard of alignment are interested in / already working on this, while 27% of people who had not heard of AI alignment said they were interested in working on this.

Split by: Heard of AI safety?

Specifically, split by the participants’ answer to the question “Heard of AI safety?”, which is described below. (The interviewer manually went through and binarized participants’ responses for the question “Heard of AI safety?”; we will use those binarized tags rather than the initial tags.)

The proportions below exclude people who did not answer the “Heard of AI safety?” question. So, for all the people in the ‘None/NA’ category for whom we have an answer for the heard-of-safety question (which is 45 total participants), 73% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

It’s useful to know the proportions the other way around, too (i.e. what proportion are interested in working on this among those who had vs. hadn’t heard of it)

Observation: If we combine those who are interested and those who already work on long-term safety (and consider only those respondents who answered the work-on-this question): 37% of those who had heard of AI safety are interested in / already working on this, while 10% of people who had not heard of AI safety said they were interested in working on this.

Split by: When will we get AGI?

I will simplify by marking as ‘willhappen’ anyone who ever said ‘willhappen’ (regardless of whether they also said ‘wonthappen’)

The table below shows the proportional breakdown (e.g. 40% of those who said AGI ‘will happen’ said ‘No’ to working on this)

None/NA willhappen wonthappen
None/NA 0.67 0.38 0.76
No 0.33 0.40 0.24
Yes 0.00 0.04 0.00
Interested.in.long.term.safety.but 0.00 0.18 0.00
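A minimal sketch of how a column-proportion table like the one above could be produced in R (the raw codes shown are hypothetical):

# Hypothetical raw codes for a handful of participants
agi_response  <- c("willhappen", "willhappen", "wonthappen", "willhappen", "wonthappen")
work_response <- c("No", "Interested.in.long.term.safety.but", "None/NA", "None/NA", "No")

tab <- table(work_response, agi_response)
round(prop.table(tab, margin = 2), 2)  # proportions within each when-AGI column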

Observation: Unsurprisingly, no one who thinks AGI won’t happen is interested in working on it. 22% of those who think it will happen are interested.

Also look at the more detailed data of how many years they think it will take for AGI to happen:

The proportions below exclude people who did not answer the work-on-this question (none/NA values), and combines the ‘Yes’ (already working on this) and ‘Interested’ values. So, for all the people in the ‘wide range’ category for whom we have an answer for the work-on-this question (which is 13 total participants), 38% of them said Interested or Yes. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: It is worth noting that all ‘Yes’ values seem to have a <50 time horizon.

Observation: The longer someone’s time horizon, the less interested they are in working on this, with wide-range falling somewhere in between. ‘200+’ might as well be ‘Won’t happen’ for these purposes. This is a good sanity check of the data.

Split by Alignment Problem

The table below shows the proportional breakdown (e.g. 45% of those who said the alignment argument is ‘valid’ said ‘No’ to working on this).

invalid.other None/NA valid
None/NA 0.68 0.5 0.33
No 0.22 0.5 0.45
Yes 0.00 0.0 0.05
Interested.in.long.term.safety.but 0.10 0.0 0.16

Something strange about this data is that the non-response isn’t distributed evenly: more people in the “invalid” group for the alignment problem lack a response to the work-on-this question than in the “valid” group, presumably because the interviewer was more likely to get to this point / ask this question of participants who said “valid”. What happens if we look just at the people who had responses to both questions (N=50)?

invalid.other valid
No 0.69 0.68
Yes 0.00 0.08
Interested.in.long.term.safety.but 0.31 0.24

Observation: If we consider all participants, more people from the ‘valid’ group are working on this or are interested in working on it than from the ‘invalid’ group. However, if we only consider participants who had a response to both questions, there is no difference based on their response to the alignment problem.

Split by Instrumental Incentives

The table below shows the proportional breakdown (e.g. 49% of those who said the instrumental argument is ‘valid’ said ‘No’ to working on this).

invalid None/NA valid
None/NA 0.78 0.67 0.25
No 0.17 0.33 0.49
Yes 0.00 0.00 0.05
Interested.in.long.term.safety.but 0.06 0.00 0.20

Something strange about this data is that the non-response isn’t distributed evenly: more people in the “invalid” group for the instrumental incentives argument lack a response to the work-on-this question than in the “valid” group, presumably because the interviewer was more likely to get to this point / ask this question of participants who said “valid”. What happens if we look just at the people who had responses to both questions (N=50)?

invalid valid
No 0.75 0.66
Yes 0.00 0.07
Interested.in.long.term.safety.but 0.25 0.27

Observation: If someone considers the instrumental argument ‘valid’ they are more likely to say they are interested in working on this (regardless of whether we look at all participants or just responders).

Split by Sector

i.e. academia vs. industry vs. research institute

The graph below shows the proportion of people (among those who had answers, so removing the “None.NA” responses from above) who said either ‘Interested…’ or ‘Yes’ within each sector. So, for all the people in the ‘academia’ category for whom we have an answer for the work-on-this question (which is 40 total participants), 32% of them said ‘Interested…’ or ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: academics seem a bit more interested in working on this than those in industry.

Split by Age

Remember, age was estimated based on college graduation year

Observation: Not much going on here.

Split by h-index

For the graphs below, that person with the outlier h-index value (>200) was removed.

Observation: Not much going on here.

Secondary ?s - Descriptives

Heard of AI safety?

“Have you heard of the term “AI safety”? And if you have or have not, what does that term mean for you?”

87/97 participants had some kind of response.

The above is using the initial tags from MAXQDA (software program with the tagged transcripts), but the interviewer also went through and manually binarized the participants’ answers:
96/97 participants had a yes/no code for this, with 1 marked as NA.

Split by Field

Field 1 (from interview response)

Observation: It’s notable that the vision researchers were the least likely to have heard of AI safety, paired with the earlier observation that they were more likely to find both the alignment and instrumental arguments invalid.

Field 2 (from Google Scholar)

Heard of AI alignment?

“Have you heard of AI alignment?”

93/97 participants had some kind of response.

The above is using the initial tags from MAXQDA (software program with the tagged transcripts), but the interviewer also went through and manually binarized the participants’ answers:
96/97 participants had a yes/no code for this, with 1 marked as NA

Split by Field

Field 1 (from interview response)

Field 2 (from Google Scholar)

Policy

Policymakers / “How much do you think about policy, what are your opinions about policy oriented around AI?”

28/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

response total_participants
policymakers.don.t.know.enough 11
we.need.regulation 6
regulate.AGI 6
worldwide.regulation.and.market.incentives 6
don.t.know.space.well 5
we.need.regulation.don.t.slow.down.research 5
scrutiny.should.be.done.by.specialists 4
should.work.on.near.term.issues 4
more.education.needed 4
not.relevant.to.my.work 2
too.slow 2
physical.harm.vs..non.harm 1

About this variable

Response Bias: The interviewer tended not to ask this question unless it was toward the beginning of the study, there was extra time in the conversation, or the participant seemed passionate about the topic.

Public / Media

Public perceptions / changes

10/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

response total_participants
more.education.needed.or.inaccurate.or.scared 10

About this variable

Response Bias: The interviewer tended not to ask this question unless it was toward the beginning of the study, there was extra time in the conversation, or the participant seemed passionate about the topic.

Colleagues

“If you could change your colleagues’ perception of AI, what attitudes/beliefs would you want them to have (what beliefs do they currently have, and would you want those to change)?”

41/97 participants had some kind of response. For example quotes, search the tag names in the Tagged-Quotes document.

response total_participants
should.care.more.about.ethics.or.safety..includes.both.general.and.specific 18
should.have.better.opinions.about.AGI 13
AI.is.overhyped 13
no.change 10
diversity.of.opinion.is.good 7
educate.students.better 1

About this variable

Response Bias: The interviewer tended not to ask this question unless it was toward the beginning of the study, there was extra time in the conversation, or the participant seemed passionate about the topic.

Did you change your mind?

“Have you changed your mind on anything during this interview and how was this interview for you?”

58/97 participants had some kind of response.

Among those who answered…

About this variable

Response Bias: The interviewer tended to avoid asking this question to people who seemed very unlikely to have changed their minds, especially those who seemed frustrated with the interview.
Order effects: The interviewer explicitly asked this question only in later interviews (some tags were retrospectively added for early interviews). See graphs below depicting the presence of a response X interview order.

General

This wasn’t a question, but rather refers to some extra tags across all the questions

22/97 participants had some kind of tag here, and could be tagged across multiple categories. Tags were not applied systematically. Displayed below are things that 3 or more people brought up. For example quotes, search the tag names in the Tagged-Quotes document.

response total_participants
mention.paperclips 6
too.much.focus.on.benchmarks.or.performance 6
media.intuitions 5
too.much.focus.on.benchmarks.or.performance.need.more.focus.on.ethics 5
blame.or.fault 4
power.too.centralized 4
says.we.need.more.emphasis.on.safety.or.ethics.late.in.conversation 3
racing 3
dual.use 3

Follow-up ?s

On 7/29/22 (about 5-6 months after the interviews, which took place in Feb-early March 2022), 86/97 participants were emailed the following three Y/N questions. (The last set of 11 participants had already agreed, prior to their initial interviews, that their anonymized transcripts could be shared, so they weren’t recontacted with follow-up questions.)

  1. [ Y / N ] I consent to sharing my anonymized transcript publicly.

  2. [ Y / N ] Did the interview have a lasting effect on your beliefs?

  3. [ Y / N ] Did the interview cause you to take any new actions in your work?

82/86 participants responded to the email or the reminder email.

  1. 72/82 = 88% responded Y

  2. 42/82 = 51% responded Y

  3. 12/82 = 15% responded Y

Lasting Effects

“Did the interview have a lasting effect on your beliefs?”

Responses present for 82/86 (95%) emailed participants.
Of the participants, 42 (51%) said yes.

Split by: When will we get AGI?

The proportions below exclude people who did not answer the lasting-effects question (or had NA values). So, for all the people in the ‘wide range’ category for whom we have an answer for the lasting-effects question (which is 16 total participants), 62% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Those whose time horizon was >200 were less likely to say the interview had a lasting effect on their beliefs.

Split by: Alignment problem

The proportions below exclude people who did not answer the lasting-effects question (or had NA values). So, for all the people in the ‘valid’ category for whom we have an answer for the lasting-effects question (which is 47 total participants), 45% of them said Yes. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Those who did not say that the alignment argument was valid in the interview were more likely to say in the follow-up that the interview had a lasting effect on their beliefs. This seems to indicate that even unconvinced participants found the discussion intellectually interesting.

Split by: Instrumental incentives

The proportions below exclude people who did not answer the lasting-effects question (or had NA values). So, for all the people in the ‘valid’ category for whom we have an answer for the lasting-effects question (which is 46 total participants), 54% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: The observation above regarding alignment-validity predicting lasting effects did not hold for instrumental-validity.

Split by: Align+Instrum Combo

The proportions below exclude people who did not answer the lasting-effects question (or had NA values). So, for all the people in the ‘valid’ category for whom we have an answer for the lasting-effects question (which is 30 total participants), 47% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Those marked as ‘invalid’ for this metric (i.e. who did not say ‘valid’ for both questions) were more likely to say in the follow-up that the interview had a lasting effect on their beliefs. Given the broken-down results above, this seems to be driven by the alignment question.

Split by: Work on this

The proportions below exclude people who did not answer the lasting-effects question (or had NA values). So, for all the people in the ‘No’ category for whom we have an answer for the lasting-effects question (which is 28 total participants), 57% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Those who said they were interested in working on this were just as likely to report that the interview had a lasting effect on their beliefs as those who said they were not. People who were already working on AI alignment research (n=3) did not say that this interview had a lasting effect on their beliefs, but that’s not very surprising since they’d likely thought about the interview content previously.

Split by: Did you change your mind?

The proportions below exclude people who did not answer the lasting-effects question (or had NA values). So, for all the people in the ‘No’ category for whom we have an answer for the lasting-effects question (which is 19 total participants), 37% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Those who said in the interview that they changed their minds were more likely to report in the follow-up that the interview had a lasting effect on their beliefs. A broader caveat, visible here but worth keeping in mind generally, is that non-response is likely not randomly distributed along some of these axes: those who said ‘Yes’ to changing their minds during the interview also seemed more likely to respond to the follow-up questions at all (see the first plot in this section). This is hard to interpret directly, though, since only 4/86 people didn’t respond to the email asking the follow-up questions, and 11 people weren’t asked the follow-up questions and were automatically marked as None/NA.

New Actions

“Did the interview cause you to take any new actions in your work?”

Responses present for 82/86 (95%) emailed participants.
Of the participants, 12 (15%) said yes.

Split by: When will we get AGI?

The proportions below exclude people who did not answer the new-actions question (or had NA values). So, for all the people in the ‘wide range’ category for whom we have an answer for the new-actions question (which is 16 total participants), 6% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: As with the lasting-effects follow-up question, those whose time horizon was >200 years were less likely to say the interview caused them to take any new actions at work, which one might expect. Note, however, that some proportion of people who said ‘Won’t happen’ still said ‘Yes’ to taking new actions at work.

Split by: Alignment problem

The proportions below exclude people who did not answer the new-actions question (or had NA values). So, for all the people in the ‘valid’ category for whom we have an answer for the new-actions question (which is 47 total participants), 19% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Those who thought the argument was invalid were less likely to say the interview caused them to take new actions at work. Maybe obvious, but it’s a good sanity check considering how few people said ‘Yes’ to this question at all. Also note that this is the opposite of the lasting-effects result, where the ‘invalid’ people were a bit more likely to say the interview had a lasting effect.

Split by: Instrumental incentives

The proportions below exclude people who did not answer the new-actions question (or had NA values). So, for all the people in the ‘valid’ category for whom we have an answer for the new-actions question (which is 46 total participants), 11% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Strangely, the people who thought the instrumental incentives argument was valid were the least likely to say the interview caused them to take any new actions at work. Given this and the lasting-effects result above, I’m curious what kind of people we’re really picking out when sectioning by their response to instrumental incentives. I’m not sure whether there’s a narrative to be built here, or whether this is just a heterogeneous group of people with different reasons for agreeing or disagreeing with the instrumental argument.

Split by: Align+Instrum Combo

The proportions below exclude people who did not answer the new-actions question (or had NA values). So, for all the people in the ‘valid’ category for whom we have an answer for the new-actions question (which is 30 total participants), 17% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Not much of a difference between ‘valid’ and ‘invalid’ here (which isn’t surprising, given that these went in opposite directions for alignment vs. instrumental). Note that although ‘None/NA’ might look like it stands out, this is really driven by the fact that very few people did not answer the alignment/instrumental questions, so the denominator is small; only one person in this category said the interview caused them to take new actions at work.

Split by: Work on this

The proportions below exclude people who did not answer the new-actions question (or had NA values). So, for all the people in the ‘No’ category for whom we have an answer for the new-actions question (which is 28 total participants), 25% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: Too few people to say much, but strangely, all 12 people who said the interview caused them to take new actions at work had either said ‘No’ or not responded to the work-on-this question.

Some more detailed analysis: None of the people who were tagged “Interested in long-term safety but” (n=13) during the interview later reported taking any new actions in their work. (The “Yes [I’m already working in alignment research]” people (n=3) also didn’t report taking any actions, but this was expected given they were likely already familiar with the interview content.) For comparison, if the “Interested in long-term safety but” people who replied to the new-actions question (n=11) had said “Yes” at the same 25% rate as the “No” group, we would have expected about 0.25 x 11 = 2.75, i.e. roughly 3, of them to say “Yes”; instead, none did.
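Here is a small sketch of that expected-count arithmetic, plus a rough binomial check that is not part of the original analysis (the probability of seeing zero ‘Yes’ answers among 11 respondents if the true rate really were 25%):

```python
from scipy.stats import binom

n_respondents = 11    # "Interested in long-term safety but" people who replied
baseline_rate = 0.25  # 'Yes' rate observed in the 'No' group

# Expected number of 'Yes' answers if this group behaved like the 'No' group.
print(baseline_rate * n_respondents)  # 2.75, i.e. roughly 3 people

# Rough check (not in the original report): probability of observing zero
# 'Yes' answers out of 11 under a Binomial(11, 0.25) model (about 0.04).
print(binom.pmf(0, n_respondents, baseline_rate))
```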

Split by: Did you change your mind?

The proportions below exclude people who did not answer the new-actions question (or had NA values). So, for all the people in the ‘No’ category for whom we have an answer for the new-actions question (which is 19 total participants), 0% of them said ‘Yes’. If you are using the interactive version (rather than the static version) of this report, hover over a bar to see the total participants in that category.

Observation: No one who said ‘No’ when asked if they changed their mind during the interview went on to say ‘Yes’ about the interview causing them to take new actions in their work. Not too surprising.

Correlation Matrices

Notes about variables used in the matrices below (a short code sketch of this encoding and the correlation computation follows this list):

  • If someone had no answer in a category, they were encoded as a missing value.
  • h-index: The outlier was not removed because we used Spearman correlations, which are rank-based and therefore robust to outliers.
  • professionalrank_ord: Professional rank was broken into 4 levels for the ordinal analysis.
    • Level 4 = “Undergraduate”, “Masters”
    • Level 3 = “PhD”, “Research Engineer”, “Software Engineer”, “ML Engineer”, “Researcher / Masters”, “Technical Staff”, “Principal Research Staff Member”, “Architect”, “Research Manager”, “Research Staff”, “Data Scientist”
    • Level 2 = “Postdoc”, “Assistant Professor”, “Research Scientist”, “Physics Fellow Researcher”, “AI Research Resident”
    • Level 1 = “Professor”, “Senior Research Fellow”, “Senior Research Scientist”, “Associate Professor”, “CEO”, “Principal AI Scientist”, “Chief Architect”, “Head of Research”
  • university / industry size ranked: Note that values only exist for a person who falls into the associated sector (i.e. industry and academic rank are split into different variables).
  • indust_size_ranked: This was converted into a rank-ordered variable (i.e. “under10_employees”=1, 10-100_employees=2, and so on); the single instance of “50k+_employees / under10_employees” was removed.
  • AGI_willhappen: wonthappen=0, both=1, willhappen=1
  • alignment_valid: invalid.other=0, valid=1 (see categorization: if someone ever said valid, then they were tagged as valid)
  • instrumental_valid: invalid=0, valid=1 (see categorization: if someone ever said valid, then they were tagged as valid)
  • workon_interestedOrYes: no=0, “Interested in long-term safety but”=1, yes=1 (see categorization)
  • heardofAIsafety: no=0, yes=1
  • heardofAIalignment: no=0, yes=1
  • chgmind: no=0, ambiguous=0, yes=1
  • lastingeffects_yes: no=0, yes=1
  • newactions_yes: no=0, yes=1
  • align_instrum_bothValid: alignment_valid=1 and instrumental_valid=1
  • align_instrum_AGI_allValid: alignment_valid=1 and instrumental_valid=1 and AGI_willhappen=1
  • heardofsafetyandalignment: heardofsafety=1 and heardofalignment=1
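To make the setup above concrete, here is a minimal sketch of the kind of encoding and pairwise Spearman computation described in these notes, using pairwise-complete observations (since missing answers are encoded as NA). The toy data and mappings are illustrative and cover only a couple of the variables listed above.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical participant-level data; missing answers are left as NA.
df = pd.DataFrame({
    "AGI_willhappen":     [1, 1, 0, None, 1, 0, 1, 1],
    "alignment_valid":    [1, 0, 0, 1, None, 0, 1, 1],
    "lastingeffects_yes": [1, 1, 0, 0, 1, None, 0, 1],
})

# Example of the 0/1 encodings described above, shown for a hypothetical
# raw label column.
raw_alignment = pd.Series(["valid", "invalid.other", "valid", None])
print(raw_alignment.map({"valid": 1, "invalid.other": 0}))

# Pairwise Spearman correlation with its p-value, using only the rows where
# both variables are present (pairwise-complete observations).
def pairwise_spearman(data, var1, var2):
    pair = data[[var1, var2]].dropna()
    rho, p = spearmanr(pair[var1], pair[var2])
    return rho, len(pair), p

rho, n, p = pairwise_spearman(df, "AGI_willhappen", "alignment_valid")
print(f"rho={rho:.3f}, n={n}, p={p:.4f}")
```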

Demographics X Main ?s

Using Field1 Labels

Below are the top 20 Spearman correlations in the matrix above, ordered by absolute ρ value.

Var1 Var2 rho n p
91 align_instrum_bothValid university_ranking_overall -0.3736683 67 0.0018413
60 align_instrum_AGI_allValid university_ranking_overall -0.3329199 67 0.0059091
231 heardofsafetyandalignment indust_size_ranked -0.3140384 27 0.1106596
14 AGI_willhappen indust_size_ranked -0.2941688 25 0.1534788
216 heardofAIsafety vision -0.2881782 96 0.0044101
122 alignment_valid university_ranking_overall -0.2816389 70 0.0181786
366 workon_interestedOrYes systems.or.computing 0.2742945 51 0.0514353
176 heardofAIalignment professionalrank_ord -0.2697567 96 0.0078628
29 AGI_willhappen university_ranking_overall -0.2679489 71 0.0238719
154 chgmind vision 0.2652421 58 0.0441936
224 heardofsafetyandalignment Current.country.of.work_UK 0.2574713 96 0.0113253
240 heardofsafetyandalignment RL 0.2560997 96 0.0117844
238 heardofsafetyandalignment professionalrank_ord -0.2523710 96 0.0131153
162 heardofAIalignment Current.country.of.work_UK 0.2505669 96 0.0138049
178 heardofAIalignment RL 0.2456769 96 0.0158348
209 heardofAIsafety RL 0.2438431 96 0.0166600
169 heardofAIalignment indust_size_ranked -0.2432481 27 0.2214762
277 instrumental_valid university_ranking_overall -0.2416961 68 0.0470698
31 AGI_willhappen yrs_since_phd 0.2355512 78 0.0378890
4 AGI_willhappen Current.country.of.work_Asia -0.2291198 94 0.0263296

Also included below are all correlations with p < 0.05. Interpret these with caution, of course; it wouldn’t really be fair to call them significant, given the number of comparisons being made (see the sketch after the table for one way this could be adjusted for).

Var1 Var2 rho n p
91 align_instrum_bothValid university_ranking_overall -0.3736683 67 0.0018413
216 heardofAIsafety vision -0.2881782 96 0.0044101
60 align_instrum_AGI_allValid university_ranking_overall -0.3329199 67 0.0059091
176 heardofAIalignment professionalrank_ord -0.2697567 96 0.0078628
224 heardofsafetyandalignment Current.country.of.work_UK 0.2574713 96 0.0113253
240 heardofsafetyandalignment RL 0.2560997 96 0.0117844
238 heardofsafetyandalignment professionalrank_ord -0.2523710 96 0.0131153
162 heardofAIalignment Current.country.of.work_UK 0.2505669 96 0.0138049
178 heardofAIalignment RL 0.2456769 96 0.0158348
209 heardofAIsafety RL 0.2438431 96 0.0166600
122 alignment_valid university_ranking_overall -0.2816389 70 0.0181786
29 AGI_willhappen university_ranking_overall -0.2679489 71 0.0238719
4 AGI_willhappen Current.country.of.work_Asia -0.2291198 94 0.0263296
31 AGI_willhappen yrs_since_phd 0.2355512 78 0.0378890
182 heardofAIalignment uncategorized.ML -0.2094637 96 0.0405391
154 chgmind vision 0.2652421 58 0.0441936
277 instrumental_valid university_ranking_overall -0.2416961 68 0.0470698
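As a sketch of one way the multiple-comparisons issue could be adjusted for (this was not done for this report), a Benjamini-Hochberg false-discovery-rate correction could be applied to the p-values. The p-values below are illustrative, roughly in the range of those in the table above.

```python
from statsmodels.stats.multitest import multipletests

# Illustrative p-values from many pairwise correlations (made up, but roughly
# in the range of the table above).
pvals = [0.002, 0.004, 0.006, 0.008, 0.011, 0.018, 0.024, 0.038, 0.044, 0.047]

# Benjamini-Hochberg FDR adjustment; 'reject' flags the comparisons that
# remain below a 5% false-discovery rate after adjustment.
reject, pvals_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p_raw, p_adj, keep in zip(pvals, pvals_adjusted, reject):
    print(f"raw p={p_raw:.3f}  adjusted p={p_adj:.3f}  below 5% FDR: {keep}")
```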

A visualization of the correlations that were the most ‘significant’ (i.e. the highest few in the table above).

Observations:

  • People in vision are unlikely to have heard of AI safety, while people in RL are more likely to have heard of it.
  • Researchers later in their careers (i.e. of higher professional rank), those in the UK, and those in RL are more likely to have heard of AI alignment.
  • People who think the alignment and instrumental arguments are valid tend to be at better-ranked universities (i.e. with numerically lower rankings) than those who don’t.

Using Field2 Labels

Below are the top 20 Spearman correlations in the matrix above, ordered by absolute ρ value.

Var1 Var2 rho n p
229 heardofsafetyandalignment Inference 0.3888363 96 0.0000904
169 heardofAIalignment Inference 0.3805622 96 0.0001308
89 align_instrum_bothValid university_ranking_overall -0.3736683 67 0.0018413
304 newactions_yes Computational.Neuro.or.Bio 0.3674265 82 0.0006845
59 align_instrum_AGI_allValid university_ranking_overall -0.3329199 67 0.0059091
5 AGI_willhappen Computer.Vision -0.3255055 94 0.0013680
227 heardofsafetyandalignment indust_size_ranked -0.3140384 27 0.1106596
142 chgmind NLP -0.3112649 58 0.0173912
235 heardofsafetyandalignment Reinforcement.Learning 0.3068228 96 0.0023615
175 heardofAIalignment Reinforcement.Learning 0.2948174 96 0.0035470
17 AGI_willhappen indust_size_ranked -0.2941688 25 0.1534788
182 heardofAIsafety Applications.or.Data.Analysis -0.2889918 96 0.0042951
119 alignment_valid university_ranking_overall -0.2816389 70 0.0181786
205 heardofAIsafety Reinforcement.Learning 0.2797072 96 0.0057805
352 workon_interestedOrYes NLP 0.2777460 51 0.0484561
174 heardofAIalignment professionalrank_ord -0.2697567 96 0.0078628
29 AGI_willhappen university_ranking_overall -0.2679489 71 0.0238719
2 AGI_willhappen Applications.or.Data.Analysis 0.2610223 94 0.0110517
220 heardofsafetyandalignment Current.country.of.work_UK 0.2574713 96 0.0113253
234 heardofsafetyandalignment professionalrank_ord -0.2523710 96 0.0131153

Also included below are all correlations with p < 0.05. As above, interpret these with caution; it wouldn’t really be fair to call them significant, given the number of comparisons being made.

Var1 Var2 rho n p
229 heardofsafetyandalignment Inference 0.3888363 96 0.0000904
169 heardofAIalignment Inference 0.3805622 96 0.0001308
304 newactions_yes Computational.Neuro.or.Bio 0.3674265 82 0.0006845
5 AGI_willhappen Computer.Vision -0.3255055 94 0.0013680
89 align_instrum_bothValid university_ranking_overall -0.3736683 67 0.0018413
235 heardofsafetyandalignment Reinforcement.Learning 0.3068228 96 0.0023615
175 heardofAIalignment Reinforcement.Learning 0.2948174 96 0.0035470
182 heardofAIsafety Applications.or.Data.Analysis -0.2889918 96 0.0042951
205 heardofAIsafety Reinforcement.Learning 0.2797072 96 0.0057805
59 align_instrum_AGI_allValid university_ranking_overall -0.3329199 67 0.0059091
174 heardofAIalignment professionalrank_ord -0.2697567 96 0.0078628
2 AGI_willhappen Applications.or.Data.Analysis 0.2610223 94 0.0110517
220 heardofsafetyandalignment Current.country.of.work_UK 0.2574713 96 0.0113253
234 heardofsafetyandalignment professionalrank_ord -0.2523710 96 0.0131153
160 heardofAIalignment Current.country.of.work_UK 0.2505669 96 0.0138049
142 chgmind NLP -0.3112649 58 0.0173912
119 alignment_valid university_ranking_overall -0.2816389 70 0.0181786
29 AGI_willhappen university_ranking_overall -0.2679489 71 0.0238719
111 alignment_valid Near.term.Safety.and.Related 0.2312672 95 0.0241374
276 lastingeffects_yes Computing 0.2486824 82 0.0242679
7 AGI_willhappen Current.country.of.work_Asia -0.2291198 94 0.0263296
30 AGI_willhappen yrs_since_phd 0.2355512 78 0.0378890
173 heardofAIalignment Optimization -0.2109789 96 0.0390774
233 heardofsafetyandalignment Optimization -0.2033902 96 0.0468636
269 instrumental_valid university_ranking_overall -0.2416961 68 0.0470698
352 workon_interestedOrYes NLP 0.2777460 51 0.0484561

Let’s visualize the top handful of correlations that were the most ‘significant’ (i.e. the highest few in the table above).

Observations:

  • Of the p < 0.01 correlations, most are related to specific fields of AI. The remainder are:
    • Thinking both the alignment problem and instrumental incentives arguments were valid (and, optionally, also thinking AGI would happen) correlated with being at a higher-ranked university (a negative correlation with the numeric university ranking).
    • Having heard of AI alignment correlated with being in a more senior position (a negative correlation with professionalrank_ord, where lower values are more senior).

Main ?s X Main ?s

Below are all Spearman correlations with p < 0.1.

Var1 Var2 rho n p
40 heardofAIsafety heardofAIalignment 0.4105489 96 0.0000326
25 chgmind lastingeffects_yes 0.4851420 50 0.0003559
2 AGI_willhappen alignment_valid 0.2819726 92 0.0064673
6 AGI_willhappen instrumental_valid 0.2650531 89 0.0120654
35 heardofAIalignment newactions_yes -0.2379919 81 0.0323969
80 workon_interestedOrYes newactions_yes -0.3091735 41 0.0491899
14 alignment_valid heardofAIsafety 0.1888619 95 0.0668083
3 AGI_willhappen chgmind 0.2422859 57 0.0693929
17 alignment_valid newactions_yes 0.1870980 80 0.0965472

Let’s visualize the top handful of correlations (I plotted each two ways, with x and y switched, to get the full picture).
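For reference, here is a minimal sketch of the ‘plotted two ways’ idea for a pair of binary variables, cross-tabulating and normalizing along each axis in turn. The column names match the encoded variables above, but the toy data are made up.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical 0/1 responses for two of the main questions.
df = pd.DataFrame({
    "heardofAIsafety":    [1, 1, 1, 0, 0, 1, 0, 1],
    "heardofAIalignment": [1, 0, 1, 0, 0, 1, 0, 0],
})

# "Two ways": condition on each variable in turn.
# Roughly P(heard of alignment | heard of safety), row-normalized:
by_safety = pd.crosstab(df["heardofAIsafety"], df["heardofAIalignment"], normalize="index")
# Roughly P(heard of safety | heard of alignment), with x and y switched:
by_alignment = pd.crosstab(df["heardofAIalignment"], df["heardofAIsafety"], normalize="index")

print(by_safety)
print(by_alignment)

# One stacked bar chart per direction.
by_safety.plot(kind="bar", stacked=True, title="Split by heardofAIsafety")
by_alignment.plot(kind="bar", stacked=True, title="Split by heardofAIalignment")
plt.show()
```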

Observations:

  • If you’ve heard of AI alignment, you’ve heard of AI safety. (This is in fact almost definitionally so; the taggers basically did not tag someone as knowing what AI alignment was without knowing what AI safety was, since alignment is a subfield of safety.) If you’ve heard of AI safety, there’s about a 50/50 shot you’ve heard of AI alignment as well, but if you haven’t heard of AI safety, you’re very unlikely to have heard of AI alignment.
  • People who said that they changed their minds during the interview were more likely to report later that the interview had a lasting effect on their beliefs. Similarly, if they said they didn’t change their mind, they were less likely to report a lasting effect.
  • In this data, if you think the alignment argument is valid, you probably think AGI will happen. If you think AGI will happen, you’re more likely to think the alignment argument is valid, but it’s not a given. It’s almost as if thinking that AGI will happen is a prerequisite for thinking the alignment problem argument is valid. Similar trends hold for the instrumental incentives argument.
  • Almost all of the people who said the interview caused them to take new action(s) at work had never heard of AI alignment.
  • None of the people who reported that the interview caused them to take new action(s) at work had said they were interested in working on AI alignment research during the interview; see the more detailed analysis under ‘Split by: Work on this’ in the New Actions section above.