From ChatGPT Enthusiast to Alleged Attacker: The Unraveling of Daniel Moreno-Gama
In a startling revelation, Daniel Moreno-Gama, the 20-year-old Texas man accused of throwing a Molotov cocktail at the home of OpenAI CEO Sam Altman, did not always harbor animosity toward artificial intelligence. According to a recent report, Moreno-Gama was once an avid fan of AI, particularly ChatGPT, during his high school years, valuing it, in his own words, for the ability to "cheat on everything."
The Turning Point: From Fanatic to Foe
The shift in Moreno-Gama's perspective began when he immersed himself in the writings of prominent AI critics. A pivotal influence was Eliezer Yudkowsky, who argued in a 2023 Time magazine column that developing superintelligent AI could lead to the extinction of humanity. "I was like, OK, I hope he's kind of wrong. But over time, I realised very few of his main criticisms ever got refuted," Moreno-Gama confessed in a podcast interview released this week by media startup Longview.
This interview, recorded under the pseudonym "Discord Dan," was initially intended to be anonymous. However, the podcasters decided to disclose his real identity following the alleged attack on Altman's residence. In the discussion, Moreno-Gama detailed his descent into what he termed "radicalisation," as he engaged in online debates, pestered family and friends about AI risks, and delved deeply into related literature.
Online Persona and Peaceful Activism
Adopting the online handle "Butlerian Jihadist"—a nod to the fictional human-machine conflict in the sci-fi novel Dune—Moreno-Gama became increasingly vocal about his fears. "I kind of became a bit, like, annoying, a bit autistic about that," he admitted, reflecting on his obsessive behavior. His mother, concerned by his fixation, encouraged him to channel his anxieties into activism.
In 2024, he joined PauseAI, a group advocating for a halt in the development of advanced AI systems. He actively participated in their Discord server, posting 34 messages, and authored articles on Substack warning of potential human extinction. Maxime Fournes, CEO of PauseAI, confirmed that none of Moreno-Gama's messages contained explicit calls to violence, stating, "We unequivocally condemn this attack and all forms of violence."
The Irony of Non-Violent Rhetoric
Ironically, in the same podcast interview where he discussed his radicalization, Moreno-Gama emphasized the importance of peaceful means. "Before we even think about violence, we need to exhaust all our peaceful means. I think protesting, I think sharing information, I think doing podcasts like this — that needs to come way before we even consider that," he asserted. This stance contrasts sharply with his alleged actions, highlighting a troubling disconnect between his words and deeds.
On another forum called Stop AI, Moreno-Gama asked whether discussing violence would get him banned; told that it would, he stopped posting there. Andy Mills, editor-in-chief of Longview, explained that the decision to reveal his identity rested on Moreno-Gama's own actions and public online statements, which had already linked the pseudonym to his real name.
Broader Implications for AI Discourse
This case underscores the intense and often polarized debates surrounding artificial intelligence safety. From a teenager using ChatGPT to cheat on assignments to a young adult allegedly resorting to violence over existential fears, Moreno-Gama's journey reflects broader societal anxieties about rapid technological advancement. It raises critical questions about how individuals process information, engage in activism, and, in extreme cases, cross the line into criminal behavior.
As investigations continue, the story serves as a cautionary tale about the potential consequences of radicalization in the digital age, where online rhetoric can sometimes spill over into real-world actions. The tech community and policymakers alike are now prompted to consider not only the ethical development of AI but also the human responses it elicits.