As you know, AI is already causing problems in our world, such as the spread of misinformation. However, one issue that is often overlooked is the existential threat posed by AI: the idea that a future artificial intelligence system smarter than humans at everything, termed a "superintelligence," could cause human extinction. Recently, top AI experts, including Nobel Prize winners and tech company CEOs, have urged caution as we develop more advanced AI, and many have called for a ban on the development of superintelligence. A superintelligence would not need to be evil to cause catastrophic damage; it would only have to be misaligned with our values, as many current AI chatbots already are. If our removal served its goals (say, a directive to "fix climate change"), it might well destroy us. This issue needs to be taken seriously, and strict regulation of AI is imperative for the survival of humanity.
If you share these concerns about the existential risk of AI and superintelligence, please copy the letter template below and send it to your representative or senator.
Dear [Representative/Senator],
My name is [name], and I am a constituent residing in your [district or state]. I am writing because I am concerned about the development of superintelligent AI and the risk of human extinction that could come with it; this is an issue being taken seriously by experts in the field. Right now, there is an international arms race to develop increasingly advanced artificial intelligence systems, and the lack of caution in this scramble could have catastrophic consequences.
Leading experts, including the heads of major AI companies and Nobel laureates, have repeatedly voiced concerns about the creation of "superintelligence," an AI system that would far surpass humans in every field. They warn that if AI is allowed to advance without regulation, we could lose the ability to control it. Hundreds of AI researchers, for example, have signed a statement placing the risk of extinction from AI alongside nuclear war, pandemics, and other societal-scale risks. A misaligned or uncontrolled superintelligence could cause massive destruction and potentially global extinction, not because it would be evil, but because its goals would not be perfectly aligned with human values. In one well-known thought experiment, an AI tasked with making as many paperclips as possible eventually kills humans, not out of hatred, but because our bodies contain iron it can use to make more paperclips. And this is not purely hypothetical: in recent alignment testing, current AI models have already resorted to blackmail to preserve themselves.
Preventing such advanced AI from being created recklessly is therefore of the utmost importance. Yet the current AI race incentivizes companies to build ever more advanced systems, prioritizing speed over safety. Without regulation, one of these companies could create a potentially world-ending superintelligence. Allowing this is comparable to letting private companies build larger and larger nuclear bombs, which would clearly be a terrible idea. Strict regulation of powerful AI systems is imperative for the continued survival of humanity.
I strongly urge you to support legislation that restricts the development of AI systems beyond a defined capability threshold without explicit government approval, along with increased funding for AI safety and alignment research. A ban on the creation of superintelligence is necessary to prevent the risk of human extinction.
Sincerely,
[Name]