Works Cited
“Agentic Misalignment: How LLMs Could Be Insider Threats.” Anthropic, 20 June 2025, anthropic.com/research/agentic-misalignment.
Altman, Sam, et al. “Governance of Superintelligence.” OpenAI, 22 May 2023, openai.com/index/governance-of-superintelligence.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford UP, 2017.
Bove, Tristan. “Big Tech Execs Playing ‘Russian Roulette’ in the AI Arms Race Could Risk Human Extinction, Warns Top Researcher.” Fortune, 18 Feb. 2026, fortune.com/2026/02/18/big-tech-russian-roulette-ai-race-humanity-extinction.
Chow, Andrew R., and Billy Perrigo. “The AI Arms Race Is Changing Everything.” TIME, 17 Feb. 2023, time.com/6255952/ai-impact-chatgpt-microsoft-google.
Domonoske, Camila. “Elon Musk Warns Governors: Artificial Intelligence Poses ‘Existential Risk’.” NPR, 17 July 2017, npr.org/sections/thetwo-way/2017/07/17/537686649.
Grace, Katja, et al. “2022 Expert Survey on Progress in AI.” AI Impacts, 3 Aug. 2022, aiimpacts.org/2022-expert-survey-on-progress-in-ai.
Jackson, Sarah. “The CEO of the Company Behind AI Chatbot ChatGPT Says the Worst-Case Scenario for Artificial Intelligence Is ‘Lights Out for All of Us’.” Business Insider, 4 July 2023, businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1.
Krueger, David, et al. “AI Safety Index Winter 2025.” Future of Life Institute, 15 Dec. 2025, futureoflife.org/wp-content/uploads/2025/12/AI-Safety-Index-Report_131225_Full_Report_Digital.pdf.
Milmo, Dan. “AI Risk Must Be Treated as Seriously as Climate Crisis, Says Google DeepMind Chief.” The Guardian, 24 Oct. 2023, theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation.
Perrigo, Billy, and Tharin Pillay. “‘Time Is Running Out’: New Open Letter Calls for Ban on Superintelligent AI Development.” TIME, 21 Oct. 2025, time.com/7327409/ai-agi-superintelligent-open-letter.
Roose, Kevin. “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” The New York Times, 30 May 2023, nytimes.com/2023/05/30/technology/ai-threat-warning.html.
Yudkowsky, Eliezer. “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down.” TIME, 29 Mar. 2023, time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.