OpenAI Scientist Quits Over AI Research Censorship, Internal Memo Reveals

In a development highlighting internal tensions, a data scientist at OpenAI, the creator of ChatGPT, resigned in September, reportedly in protest against the company's growing reluctance to publish research on the potential adverse effects of artificial intelligence on the economy. The departure of Tom Cunningham has sparked concerns about a broader pattern of secrecy within the AI giant's research division.

Protest Exit and a Series of Departures

Tom Cunningham left the Sam Altman-led company in September 2024, with sources indicating his resignation was a direct protest against what he perceived as OpenAI's self-censorship. Specifically, Cunningham and others were reportedly frustrated by the company's hesitation to publish studies examining how AI could negatively impact job markets and cause economic dislocation.

This resignation was not an isolated incident. It was one of at least two recent exits from OpenAI's financial and economic research team, both linked to the alleged pullback in publishing sensitive research. The situation prompted Chief Strategy Officer Jason Kwon to circulate an internal memo to all employees in a bid to address the mounting concerns, as confirmed by sources familiar with the matter.

Internal Tensions and a Shift Towards Secrecy

According to a report by Wired, which cited internal sources, OpenAI has become increasingly secretive about its research publications. This shift has made it challenging for the economic research team to release the high-quality, rigorous analyses they were conducting. In an internal farewell message, Cunningham allegedly wrote that the team was caught in a growing conflict between doing proper, objective analysis and effectively serving as a public relations or advocacy wing for OpenAI.

The report suggests that while OpenAI has a history of publishing impactful economic research, such as the widely cited 2023 paper "GPTs Are GPTs" on automation's exposure across industries, the company's appetite for studies highlighting AI's potential downsides has diminished this year. A third economist, who collaborated with OpenAI, claimed the company is now steering researchers towards publishing articles that cast its technology in a more positive light. A recent example is a study released by OpenAI indicating that business clients save 40-60 minutes daily using its AI tools.

Leadership's Response and Broader Implications

In the internal memo following Cunningham's departure, Jason Kwon addressed the team's concerns. Kwon stated that OpenAI, as a responsible leader in the AI industry, must do more than identify problems with the technology; it must "build the solutions." He argued that because OpenAI is not merely a research institution but the leading actor deploying AI into the world, it is expected to take agency over the outcomes.

In a separate statement to Wired, OpenAI spokesperson Rob Friedlander emphasized the company's commitment to economic research, noting the hiring of inaugural chief economist Aaron Chatterji last year and the expansion of research activities. Friedlander said, "The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy."

This controversy emerges at a critical juncture for OpenAI, as it forges multibillion-dollar partnerships with global corporations and governments to cement its role in the world economy. The incident also echoes past concerns, such as those raised by former head of policy research Miles Brundage, who left in October 2024 stating that it had become "too constraining" to publish on all important topics. The episode raises fundamental questions about transparency and responsibility as powerful AI systems continue to integrate into the fabric of the global economy.