Existential Safety
Provocative Questions
See a downloadable Google Document version of these questions.
Risks
● What would Adolf Hitler do with a jailbroken ChatGPT-5?
● What would Adolf Hitler do with 100 million slaughterbots? (This is misleading on purpose. He would only need one to take over the world. It would self-replicate as needed.)
● If Adolf Hitler were alive today, would he prefer to have 100 Tsar Bomba nuclear bombs or one superintelligent AI? (One superintelligent AI could produce weapons of mass destruction far more powerful than Tsar bombs, as well as weapons of selective destruction for targeted attacks.)
● What kind of harm could the world’s ~248M online individuals with psychopathy create with a jailbroken ChatGPT-5? If 1% chose to commit harm, how could we plausibly stop all ~2.5M of them?
● How can we ensure an advanced AI is never jailbroken when billions of people will have an extraordinary incentive to do so (immense money, fame, power, longevity, safety, and more)? (This is misleading on purpose. We can't with any known approaches. We’d have to invent 100% effective technological or social control mechanisms, and 20+ years of work on this have failed to produce anything viable.)
● What would you personally do with 100 million superintelligent agents working 24/7/365 on your behalf? Would your army's actions benefit all life on Earth? Would any of Earth's millions of species potentially be worse off? Can you know for sure what you ought to do, or not do, with this godlike power?
● If you had 100 million superintelligent agents working for you, could anyone in the world stop you from enacting your desired changes? Could anyone stop your neighbor from enacting their desired changes? Or Ted Kaczynski from enacting his? Or Osama bin Laden? Or a yet-to-be-discovered terrorist leader? Would wars between swarms of superintelligent agents fighting for different values be desirable?
● Would you rather have an organic alien superintelligence or an artificial superintelligence in your backyard? (This is misleading on purpose. The answer is neither, unless you are absolutely certain they would be aligned with your values.)
● If digital aliens from Alpha Centauri beamed into our computing systems tomorrow, how would humanity control them or our computing systems? How could we stop the Alpha Centaurians from doing anything they wanted? (Replace the Alpha Centaurians with an agentic, self-directed ChatGPT-5 and ask the same question.)
● Do you think the leaders of the AI safety field, including Nobel laureates and others who actually invented much of the AI we’re now trying to understand, are wrong about the risks? If so, why are they wrong and you right?
● Do you similarly disagree with international nuclear safety agencies, made up of nuclear safety experts, who set safety standards for nuclear power plants? Or with standards-setting bodies, made up of experienced engineers, who set safety standards for bridge design? Or with international health agencies, made up of experienced medical directors, who set safety standards for brain surgery procedures?
● Are you so confident that there are no real risks, or that they can be easily and quickly mitigated, that you’re willing to risk your life, your family’s lives, everyone else’s lives, all non-human lives, and all potential future lives on your being right?
Prioritization
● Where does mitigating existential or catastrophic risks from AI rank on your list of daily priorities?
● What exactly would you need to see to make mitigating existential or catastrophic risks from AI the number one daily priority for you?
● If you had been an adult during the 1940s, when nuclear apocalypse was becoming an unmistakable risk, what would you have needed to see to make mitigating existential or catastrophic risks from nuclear weapons the number one daily priority for you?
● Do you consider the 33-50% odds of a nuclear holocaust during the 1962 Cuban Missile Crisis (according to United States President John F. Kennedy) to have been an acceptable risk for humanity to bear for the benefits of having nuclear weapons available?
● Would you personally have done anything to prevent that 1962 near-holocaust if you had been alive at the time? Or would you have helped prevent any of the dozens of other near misses that have occurred over the decades?
● Genuinely, how appreciative are you of the heroic work of the nuclear safety experts, policymakers, philanthropists, civil society organization leaders, and citizen advocates to whom you owe your life today? Have you ever reached out to one of them to say thank you? Or donated, if applicable? Could you enjoy your life today if they had not dedicated themselves to your existential safety yesterday?
● Do you have anything more important to do than trying to mitigate existential or catastrophic risk from AI right now? Or, if not from AI, then from nuclear holocaust or an engineered pandemic?
● Can you achieve anything else you value if humanity destroys itself with AI in the next few years?
● Do you have a written COVID-2X plan, complete with the equipment, supplies, and step-by-step actions you will take when the next major pandemic hits? If not, why not? (See template. Free guides like this have been online since well before 2019.)
● What exactly would you need to see in order to begin seriously advocating for pandemic preparedness and allocating at least 10% of your discretionary time and money to preventing COVID-2X? Were COVID-19's estimated 27 million excess deaths and trillions in economic damage not enough?
Governance
● We’ve known about the devastation that machine intelligence would likely inflict on humanity since 1863, when Samuel Butler published “Darwin among the Machines.” In 1951, amid the digital computer revolution, Alan Turing made this risk cogently clear. The United Nations only recognized these risks in 2023, and neither it nor any nation has yet declared mitigating them its number one priority. Why has our leadership done so little to prevent these clear risks?
● We’ve known about the devastation that climate change would likely inflict on humanity since 1896. Why has our leadership done so little to prevent these clear risks?
● We’ve known about the devastation that nuclear weapons would likely inflict on humanity since 1933. There are still 12,000+ nuclear weapons in the world today, including some that are literally missing. Why has our leadership done so little to prevent these clear risks?
● We’ve known about the devastation that pandemics can inflict on humanity since at least 430 B.C. Why has our leadership done so little to prevent these clear risks?
● If a planet-killer-sized asteroid were on a collision course with Earth, arriving on the same timeframe expected for artificial general intelligence (roughly one to five years, according to some estimates), what odds would you put on humanity developing the geopolitical coordination and technological know-how necessary to neutralize the asteroid in time?
● What level of extinction risk should a new commercial product be allowed to impose on humanity?
● Should any person or company be able to control an army of billions of robots? Is there any viable way to stop a person or company from building such an army, given continually declining manufacturing costs and increasing access to production designs?
● Should any person or company be able to secretly administer an experimental, dangerous serum to billions of people without their explicit consent? What about an experimental, dangerous digital serum like AI?
Call to Action
● If you're able, will you consider making the mitigation of existential or catastrophic risks from AI your number one daily priority going forward, until we’ve collectively eliminated most of the risks? If not, why not?
● If not you, then who? (The author has been in the space for 25+ years and can assure you there are nowhere near enough people actually fighting to save our species.)
#ExistentialSafety
We all deserve existential safety.
About · Team · Privacy Policy · Terms of Use
hello@existentialsafety.org · © 2024 · All rights reserved