Renowned Astrophysicist Calls for Global Ban on Superintelligence Development
In a stark warning that has reignited debates about artificial intelligence safety, astrophysicist Neil deGrasse Tyson has called for an international treaty to ban the development of superintelligence. Speaking in what appears to be a recent interview, Tyson declared that this branch of AI is “lethal” and emphasized the urgent need for global action.
“That branch of AI is lethal. We’ve got to do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans,” Tyson stated emphatically.
The exact context of these remarks remains elusive; the specific interview or talk referenced in Reddit comments has proven difficult to locate with certainty. Even so, Tyson’s stance is consistent with his long-standing concerns about the existential risks posed by artificial intelligence.
Tyson’s Explicit Call for a Complete Ban
Tyson’s statement that “Nobody should build it” represents a clear and direct call for prohibition on superintelligence development. This position, while controversial, is not entirely without precedent in scientific discourse. The renowned astrophysicist is advocating for a complete moratorium on what he and other experts view as potentially civilization-threatening technology.
This explicit call for a ban stands in contrast to more nuanced approaches to AI governance that focus on safety frameworks and ethical guidelines. By advocating for a prohibition rather than regulation, Tyson is aligning himself with the most cautious voices in the AI safety community.
Context of AI Development Concerns
Tyson’s concerns are not unique; they echo longstanding warnings from researchers like Nick Bostrom, whose influential book “Superintelligence: Paths, Dangers, Strategies” helped bring AI existential risk into mainstream conversation. Bostrom defines superintelligence as an intellect that far surpasses human capability across virtually every domain – a gap comparable to the one between humans and ants, with humanity on the lesser side.
Other AI safety experts have also expressed serious concerns. Roman Yampolskiy, for instance, has suggested there’s a 99.9% chance that superintelligent AI will outsmart and potentially outcompete humans, raising questions about humanity’s future in such a scenario.
Designation of Superintelligence as “Lethal”
Tyson’s characterization of superintelligent AI as “lethal” reflects a view shared by a significant segment of the AI research community: that this technology represents an existential risk to humanity. This perspective holds that once artificial general intelligence (AGI) is achieved, the leap to superintelligence could happen rapidly, potentially giving rise to systems whose goals and behaviors are unpredictable or misaligned with human values.
The concern isn’t merely hypothetical. As AI systems become more capable, they’re being employed in increasingly critical domains – from autonomous weapons to financial markets to infrastructure management. The potential for superintelligence to amplify these capabilities beyond human comprehension or control is what makes it “lethal” in Tyson’s framing.
Technical Definitions of Superintelligence
In technical terms, superintelligence refers to any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. This includes scientific creativity, general wisdom, and social skills – domains where humans currently hold a monopoly. The concern isn’t simply about more powerful computers, but about systems that can improve themselves recursively, potentially leading to exponential growth in intelligence.
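The compounding dynamic described above can be sketched with a toy model. To be clear, the update rule and parameters here are illustrative assumptions for exposition, not a forecast of real AI systems: the idea is simply that if each round of self-improvement scales with the capability already achieved, growth compounds rather than adding up linearly.

```python
# Toy model of recursive self-improvement (illustrative assumption,
# not a claim about real AI): each generation, the system improves
# itself in proportion to its current capability, so gains compound.

def recursive_improvement(initial: float, gain: float, steps: int) -> list[float]:
    """Return capability levels where each step's improvement
    scales with the capability already achieved."""
    levels = [initial]
    for _ in range(steps):
        current = levels[-1]
        levels.append(current * (1.0 + gain))  # compounding growth
    return levels

trajectory = recursive_improvement(initial=1.0, gain=0.5, steps=10)
print(trajectory[-1])  # ~57.7: roughly 58x the starting capability
```

A fixed additive gain would yield only linear growth (1.0, 1.5, 2.0, …); making each improvement proportional to current capability is what produces the exponential curve that worries AI safety researchers.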
Advocacy for International Treaty Framework
Tyson’s proposal for an international treaty reflects recognition that AI development is a global phenomenon that cannot be effectively regulated by individual nations alone. His emphasis on “everyone needs to agree” points to the challenge of preventing what economists call a “race to the bottom” – where countries might be tempted to pursue superintelligence development for competitive advantage, even at great risk.
This approach draws parallels with existing international frameworks for managing other potentially catastrophic technologies. The Nuclear Non-Proliferation Treaty, for example, while imperfect, has largely succeeded in preventing additional countries from developing nuclear weapons. Similarly, the Biological Weapons Convention attempts to prevent the development and stockpiling of biological weapons.
Historical Treaty Precedents
There are several precedents for international treaties addressing potentially dangerous technologies:
- Nuclear Non-Proliferation Treaty (NPT) – Aimed at preventing the spread of nuclear weapons and promoting peaceful uses of nuclear energy
- Biological Weapons Convention (BWC) – Prohibits the development, production, and stockpiling of biological weapons
- Chemical Weapons Convention (CWC) – Prohibits the development, production, and use of chemical weapons
- Outer Space Treaty – Establishes basic principles for international space law, including prohibitions on weapons of mass destruction in space
While these treaties have had varying degrees of success and enforcement challenges, they demonstrate international recognition that certain technologies require collective action for safe management.
Acknowledgement of Treaty Limitations
One of the more nuanced aspects of Tyson’s statement is his acknowledgment that “Treaties are not perfect, but they are the best we have as humans.” This qualification shows awareness of the real challenges in implementing and enforcing such agreements, particularly in the realm of rapidly advancing technology.
Indeed, critics might point out that international treaties face numerous obstacles:
- Enforcement Challenges – Verifying compliance with technology bans is notoriously difficult, especially with dual-use technologies that have legitimate applications
- Sovereign Interests – Nations may be reluctant to sacrifice strategic advantages for collective security
- Rapid Technological Change – AI development moves quickly, potentially outpacing regulatory mechanisms
- Differing National Priorities – Countries have varying risk tolerances and economic incentives
Tyson’s realism about treaty limitations actually strengthens his argument by acknowledging these practical concerns upfront rather than proposing an idealistic solution.
Current AI Governance Landscape
Tyson’s call for a superintelligence ban exists within a broader context of ongoing discussions about AI governance. Several frameworks are already being developed or implemented:
- EU AI Act – A comprehensive regulatory framework that categorizes AI systems by risk level
- UNESCO Recommendation on the Ethics of AI – A non-binding framework for ethical AI development
- National AI Strategies – Various countries have developed their own approaches to AI governance
However, these frameworks typically focus on current AI systems rather than attempting to ban specific future developments. Tyson’s proposal represents a more precautionary approach that aims to prevent certain capabilities from ever being developed.
The Engagement Factor
Tyson’s celebrity status in science communication undoubtedly amplifies the significance of his statements. Unlike academic papers on AI safety or policy white papers that might be read primarily by experts, Tyson’s pronouncements can reach millions through his TV appearances, social media, and public lectures.
This high-profile advocacy brings both advantages and potential drawbacks. On one hand, it raises awareness of AI safety concerns among the general public and policymakers who might otherwise be unaware of the risks. On the other hand, it may oversimplify complex technical issues or appear alarmist to those more familiar with AI development.
Balancing Innovation and Caution
The debate Tyson has joined highlights a fundamental tension in technology policy: balancing the immense potential benefits of AI development against existential risks. Proponents of AI advancement argue that severe restrictions could prevent humanity from realizing tremendous benefits in medicine, environmental management, scientific discovery, and other domains.
They might also point out that defining “superintelligence” precisely enough to ban it effectively is technically challenging. The transition from advanced AI to superintelligence isn’t necessarily a clear binary, and attempting to regulate along this gradient could stifle beneficial innovations.
Looking Forward
Tyson’s call for an international treaty to ban superintelligence reflects growing concerns among scientists and technologists about the trajectory of AI development. Whether his proposed solution of prohibition through international agreement is the right approach remains to be seen.
What’s clear is that conversations like these are becoming increasingly urgent as AI capabilities advance. Tyson’s intervention, regardless of one’s position on the specific proposal, serves the vital function of ensuring that questions of AI safety and governance remain part of public discourse.
As nations and organizations continue to develop AI capabilities, the international community will need to grapple with how to balance innovation with safety. Whether that takes the form of Tyson’s proposed ban, more nuanced regulatory frameworks, or entirely different approaches remains an open question – but one that society must address sooner rather than later.