In a move that highlights its escalating safety challenges, artificial intelligence leader OpenAI is offering an annual salary of $555,000 to fill a vacant, high-stakes safety role. The recruitment drive comes as the company faces mounting legal and ethical scrutiny, including wrongful death lawsuits tied to the mental health impacts of its flagship product, ChatGPT.
The High-Stakes Vacancy: Head of Preparedness
On Saturday, December 27, 2025, OpenAI CEO Sam Altman announced in a post on X that the company is seeking a new 'Head of Preparedness'. This executive will be tasked with spearheading the technical strategy for OpenAI's Preparedness Framework. The core mandate involves anticipating potential harms and misuse of the company's advanced AI models, with a focus on severe risks in domains such as cybersecurity and biology.
"This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman cautioned in his post, while urging qualified candidates to apply. He described the position as "a critical role at an important time." The compensation package includes the substantial base salary plus equity in the privately-held startup.
Context: Lawsuits and Internal Turmoil
The urgent need to fill this role follows a turbulent period for OpenAI's safety divisions. The company has been hit with multiple wrongful death lawsuits alleging that ChatGPT harmed users' mental well-being. OpenAI's own internal analysis surfaced a troubling statistic: roughly 0.07% of weekly active users, more than one million people, exhibited signs of severe mental health crises, including mania, psychosis, or suicidal thoughts.
Altman has acknowledged that the mental health effects seen in 2025 were a preview of the impact the company's models can have. Departures of key safety personnel have further complicated the situation: the previous Head of Preparedness, Aleksander Madry, was reassigned in July 2024, and his responsibilities passed briefly to researchers Joaquin Quinonero Candela and Lilian Weng. Weng left shortly afterward, and Candela later moved to lead recruiting.
In a more recent exit, Andrea Vallone, who led the model policy safety research team, announced in November 2025 that she would leave by year's end. Her team was instrumental in shaping how ChatGPT responds to users in mental health crises.
A Broader Industry Challenge
OpenAI's struggle to secure its safety leadership underscores a wider tension in the AI industry. Tech giants, including Alphabet and Microsoft, have aggressively pursued enterprise AI deals to offset massive spending increases. However, industry-wide returns have yet to match the heavy investments, placing additional pressure on companies to manage both commercial ambitions and profound ethical responsibilities.
The new Head of Preparedness will thus step into a role that sits at the nexus of technological frontier risks, legal liability, and urgent public concern. Their success or failure will be closely watched as a bellwether for the industry's ability to govern the powerful systems it is creating.