AI RISK INTERVIEW PERSPECTIVES
Introduction
Generally Capable AI
Within 50 years
More than 50 years
Why these systems might come soon
Never
Biology is special
Seems weird
No true creativity
Understand the brain first
We wouldn’t want that
Can’t see it based on current progress
Need AI paradigm shift
People would stop this
We need embodiment
AI cannot be conscious
The Alignment Problem
Agree
Disagree
Test before deploying
Be careful with reward function
Alignment is easy
Alignment will progress automatically
Need to know what type of AGI
Misuse is a bigger problem
Other global risks are more dangerous
Humans have alignment problems too
Instrumental Incentives
Agree
Disagree
Consciousness required
Stop it physically
Current systems don’t do that
Wouldn’t design it that way
Human oversight
Threat Models
Agree
Disagree
Pursuing Safety Work
Agree
Disagree
Policymakers will resolve this
Work not useful currently
Work not urgent currently
Conclusion