In a bold move, hundreds of leading figures from the global tech and science community, including Apple co-founder Steve Wozniak and Virgin Group founder Richard Branson, have signed an open letter urging a global ban on the development of superintelligent artificial intelligence (AI) until its safety and controllability are scientifically verified. The call, published by the Future of Life Institute (FLI), has reignited global debate about the ethical and existential risks of AI advancement.
The document, titled “Statement on Superintelligence,” gathers signatures from some of the most respected minds in the field, including two of AI’s “godfathers,” Geoffrey Hinton and Yoshua Bengio. Other high-profile supporters include Mike Mullen, former Chairman of the U.S. Joint Chiefs of Staff; former Trump strategist Steve Bannon; and Prince Harry and Meghan Markle, who have publicly expressed concerns about AI’s impact on society. The letter emphasizes that AI development must not outpace humanity’s ability to ensure its safety, warning that a failure to regulate could lead to irreversible consequences.
Notably, some of the most powerful figures in the AI industry have not signed. Absent from the signatories are OpenAI CEO Sam Altman, DeepMind co-founder Mustafa Suleyman (now leading AI at Microsoft), Anthropic CEO Dario Amodei, and xAI founder Elon Musk, although Musk had previously endorsed a 2023 FLI letter calling for a temporary pause in the training of AI models more powerful than GPT-4. Their absence highlights a growing divide between AI developers racing to innovate and those urging restraint and safety-first principles.
According to a recent FLI survey, only 5% of Americans support rapid, unregulated AI development, while 73% favor strict regulation. Even more strikingly, 64% believe that superintelligent AI, meaning systems that surpass human cognitive ability, should be banned until safety is proven. This broad public sentiment underscores the urgency for governments and tech companies to align on transparent safety standards before AI reaches a point where it becomes uncontrollable or misaligned with human values.
Critics of unchecked AI growth warn that superintelligent systems could pose severe risks, including mass unemployment, economic destabilization, and threats to civil liberties and national security. The FLI’s open letter serves as a global wake-up call, pushing the tech industry to balance innovation with ethical responsibility.
Anthony Aguirre, co-founder of the Future of Life Institute, said in a statement accompanying the letter: “Many dream of powerful AI tools to advance science, medicine, and human potential. But the corporate race to create AI that surpasses human intelligence — and replaces humans entirely — is not what society wants.”
Conclusion:
As the AI race accelerates, the world stands at a crossroads between technological progress and existential risk. The call from global leaders signals a growing demand for AI governance rooted in ethics, transparency, and safety. Until humanity can prove that superintelligent AI can be controlled, halting its development may be the only way to ensure a future where humans remain at the center of technological evolution.