Existential Safety
Provocative Questions

See a downloadable Google Document version of these questions.

Risks

What would Adolf Hitler do with a jailbroken ChatGPT-6?

What would Adolf Hitler do with 100 million superintelligent slaughterbots? (This is misleading on purpose. He would only need one to take over the world. It would self-replicate as needed.)

If Adolf Hitler were alive today, would he prefer to have 100 Tsar Bomba-class nuclear bombs or one superintelligent AI? (One superintelligent AI could produce weapons of mass destruction far more powerful than Tsar bombs, as well as weapons of selective destruction for targeted attacks.)

What kind of harm could the world’s ~248M online individuals with psychopathy create with a jailbroken ChatGPT-6? If 1% chose to commit harm, how could we plausibly stop all ~2.5M of them?

How can we ensure an advanced AI is never jailbroken when billions of people will have an extraordinary incentive to do so (e.g., immense money, fame, power, longevity, safety, etc.)? (This is misleading on purpose. We can't with any known approaches. We’d have to invent 100% effective technological or social control mechanisms. 20+ years of work on this have failed to produce anything viable.)

For the first time in history there can now be millions of AI-augmented bioterrorists and reckless biological experimenters. What specifically can the UN, major superpowers, leading AI companies or the general public do to prevent a mass casualty incident from a bioterrorist using jailbroken frontier AI models? Or from a lab leak? When will they do it? Can they be 100% successful at their attempts for the indefinite future? Does this not mean a new pandemic (or multipandemic) is nearly inevitable?

What would you personally do with 100 million superintelligent agents working 24/7/365 on your behalf? Would your army's actions be of benefit to all life on Earth? Would any of Earth's millions of species potentially be worse off? Can you know for sure what you ought to do or not do with this godlike power?

If you had 100 million superintelligent agents working for you, could anyone in the world stop you from enacting your desired changes? Could anyone stop your neighbor from enacting their desired changes? Or Ted Kaczynski enacting his desired changes? Or Osama bin Laden? Or a yet-to-be-discovered terrorist leader? Would wars between swarms of superintelligent agents fighting for different values be desirable?

Would you rather have an organic alien superintelligence or artificial superintelligence in your backyard? (This is misleading on purpose. The answer is neither unless you are absolutely certain they would be aligned with your values.)

If digital aliens from Alpha Centauri beamed into our computing systems tomorrow, how would humanity control them or our computing systems? How could we stop the Alpha Centaurians from doing anything they wanted? (Replace Alpha Centaurians with an agentic, self-directed ChatGPT-6 and ask the same question.)

Do you think the leaders of the AI safety field, including Nobel laureates and others who actually invented much of the AI we're now trying to understand, are wrong about the risks? If so, why are they wrong and you are right?

Do you similarly disagree with international nuclear safety agencies made up of nuclear safety experts who set safety standards for nuclear power plants? Or with standards-setting bodies made up of experienced engineers who set safety standards for bridge design? Or with international health agencies made up of experienced medical directors who set safety standards for brain surgery procedures?

Are you so confident that there aren’t real risks, or that they can be easily and quickly mitigated, that you’re willing to stake your life, your family’s lives, everyone else’s lives, all non-human lives, and all potential future lives on you being right?

Prioritization

Where does mitigating existential or catastrophic risks from AI rank on your list of daily priorities?

What would you need to see to make mitigating existential or catastrophic risks from AI the number one daily priority for you? 

If you were an adult during the 1940s when nuclear apocalypse was becoming an unmistakable risk, what would you have needed to see to make mitigating existential or catastrophic risks from nuclear weapons the number one daily priority for you?

Do you consider the 33-50% odds of a nuclear holocaust during the 1962 Cuban Missile Crisis (an estimate attributed to United States President John F. Kennedy) to have been an acceptable risk for humanity to bear for the benefits of having nuclear weapons available?

What would you have done to help prevent the nuclear holocaust that almost occurred during the Cuban Missile Crisis, if you had the opportunity? Or what would you have done to help prevent any of the dozens of other risky scenarios that have occurred over the decades? Are you helping prevent future catastrophes now through your time, dollars or vote?

Genuinely, how appreciative are you of the heroic work of the nuclear safety experts, policymakers, philanthropists, civil society organization leaders, and citizen advocates you almost certainly owe your life to today? Have you ever reached out to one to thank them? Or donated, if applicable? Could you enjoy your life today if they had not dedicated themselves to your existential safety yesterday?

Can you achieve anything else you value if humanity destroys itself with AI in the next few years? 

Do you have anything more valuable to do than trying to mitigate existential or catastrophic risk from AI now? Or if not AI, then from nuclear holocaust or an engineered pandemic?

Do you have a written COVID-2X plan, complete with equipment, supplies, and the step-by-step actions you will take when a major pandemic hits again? If not, why not? See the template plan. Free guides like this have been online since well before COVID-19. Prediction markets estimate a ~31% chance of another pandemic by 2030 and a ~74% chance by 2040.

What exactly would you need to see in order to begin seriously advocating for pandemic preparedness and allocating at least some of your discretionary time and money to preventing a COVID-2X or similar? Were COVID-19's ~27 million excess deaths and trillions of dollars in economic damage not sufficient justification?

Governance

We’ve known about the devastation that machine intelligence would likely inflict on humanity since 1863. In 1951, amidst the digital computer revolution, Alan Turing cogently made this risk clear. The United Nations only recognized these risks in 2023, and neither it nor any nation has yet declared mitigating these risks its number one priority. Why has our leadership done so little to prevent these clear risks?

We’ve known about the devastation that climate change would likely inflict on humanity since 1896. Why has our leadership done so little to prevent these clear risks?

We’ve known about the devastation that nuclear weapons would likely inflict on humanity since 1933. There are still 12,000+ nuclear weapons in the world today, including some that are literally missing. Why has our leadership done so little to prevent these clear risks?

We’ve known about the devastation that pandemics can cause on humanity since before 430 B.C. Why has our leadership done so little to prevent these clear risks?

If a planet-killer-sized asteroid were on a collision course with Earth within the same timeframe expected for artificial general intelligence—roughly one to five years, according to some estimates—what odds would you put on humanity developing the geopolitical coordination and technological know-how necessary to neutralize the asteroid in time?

What level of extinction risk should a new commercial product be allowed to impose on humanity? Any?

Should any person or company be able to control an army of millions of robots? Is there any viable way to stop a person or company from building this army given the continual decline of manufacturing costs and increase in access to production designs?

Should any person or company be able to secretly administer an experimental, dangerous serum to billions of people without their explicit consent? What about an experimental, dangerous digital serum like AI?

The founders of frontier AI companies have known about the existential risks their AIs would pose to humanity since before they invented them. The "AI suicide race" that we're now in has been predicted since at least 1965. Why did AI leaders not use their time, talent, and money to prevent these risks by building or improving AI safety-focused global governance organizations before leading existentially risky research into frontier AI? For example, by donating to the United Nations for AI safety? For reference, in 1997 Ted Turner donated $1B to the United Nations for international peace and human dignity, calling it his best ever investment.

Why aren't AI leaders heeding the 100K+-signatory call for a prohibition on artificial superintelligence until there is "broad scientific consensus that it will be done safely and controllably, and with strong public buy-in"? If they claim to worry about their competitors racing ahead, why don't they agree to conditionally pause their riskiest work if others do so as well, and then dedicate their time, talent, and money to developing global governance of AI first?

Call to Action

If you're able, will you consider making the mitigation of existential or catastrophic risks from AI your number one daily priority going forward, until we’ve collectively eliminated most of the risks? If not, why not? If you are extremely busy, can you commit to one small action per day? Many can be done in seconds or minutes at near-zero financial cost.

If not you, then who? (The author has been in the space for 25+ years and can assure you there are nowhere near enough people actually fighting to save our species from extinction.)

#ExistentialSafety

We all deserve existential safety.

hello@existentialsafety.org · © 2024 · All rights reserved
