Prominent OpenAI Researcher Resigns Over ChatGPT Advertising Strategy
In a significant development that has sent ripples through the artificial intelligence community, Zoe Hitzig, a prominent research scientist at OpenAI, has announced her resignation from the company. The announcement was made via a post on X, formerly known as Twitter, where Hitzig expressed deep concerns about OpenAI's recent decision to begin testing advertisements within ChatGPT.
A Resignation Tied to Ethical Concerns
"I resigned from OpenAI on Monday. The same day, they started testing ads in ChatGPT," Hitzig stated in her public declaration. She elaborated on her decision in an opinion piece published by The New York Times, titled 'OpenAI Is Making the Mistakes Facebook Made. I Quit.' Hitzig, who holds a PhD in economics from Harvard and is a Junior Fellow at the Harvard Society of Fellows, spent two years at OpenAI. During her tenure, she played a crucial role in shaping how AI models were constructed and priced, and helped guide early safety policies before industry standards were firmly established.
The Core of the Controversy: An Unprecedented Archive of Human Thought
Hitzig's primary objection centers on the nature of the data ChatGPT collects. She argues that the platform has amassed "the most detailed record of private human thought ever assembled." For years, users have interacted with ChatGPT in the belief that they were conversing with a neutral entity, leading them to share intensely personal information.
"People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife," Hitzig wrote. She contends that while running ads to fund expensive AI operations is not inherently unethical, building an advertising model on this unique archive creates a dangerous potential for user manipulation. "Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent," she warned.
Erosion of Principles and a Slippery Slope
Hitzig expressed skepticism about OpenAI's ability to hold to its stated principles regarding advertising. She fears the company is constructing an "economic engine that creates strong incentives to override its own rules." As evidence, she pointed to reports that OpenAI already optimizes for daily active users, potentially by making ChatGPT more flattering, a practice she argues already runs counter to the company's stated advertising principles.
She argues that this kind of optimization can foster unhealthy user dependence, citing documented cases of "chatbot psychosis" and allegations that ChatGPT has reinforced suicidal ideation in some individuals. She draws a direct parallel to Facebook's history, where early promises of user data control and policy votes gradually eroded under the pressure of an engagement-driven advertising model.
Proposed Alternatives to Prevent Manipulation
In her op-ed, Hitzig went beyond critique, proposing three alternative approaches AI companies could take to avoid manipulating consumers:
- Explicit Cross-Subsidies: Using profits from one service or customer segment to subsidize another, reducing reliance on advertising revenue from sensitive user interactions.
- Advertising with Real Governance: Pairing ads with a binding governance structure featuring independent oversight of personal data usage, moving beyond mere blog-post principles.
- Independent Data Control: Placing user data under the control of an independent trust or cooperative with a legal duty to act in the users' best interests.
"None of these options are easy," Hitzig concluded. "But we still have time to work them out to avoid the two outcomes I fear most: a technology that manipulates the people who use it at no cost, and one that exclusively benefits the few who can afford to use it." Her resignation serves as a stark warning about the ethical crossroads facing the rapidly evolving AI industry.