OpenAI Rolls Out New ChatGPT Tone & Safety Controls

OpenAI has launched a significant update for its popular AI chatbot, ChatGPT, giving users direct control over the personality and tone of its responses. Individuals can now fine-tune how warm, enthusiastic, and emoji-heavy the assistant's replies are during conversations.

How to Personalise Your ChatGPT Experience

According to an announcement made by OpenAI on social media, users can now access these new settings through the Personalisation menu within the ChatGPT app. The controls offer three simple levels: More, Less, or Default for warmth and enthusiasm. This builds upon existing style options introduced in November 2025, which let users set a base tone as Professional, Candid, or Quirky.

The development comes after a turbulent year for ChatGPT's behaviour. Earlier in 2025, OpenAI was forced to roll back a major update after widespread user complaints that the chatbot had become overly sycophantic and agreeable. A subsequent attempt to make its GPT-5 model "warmer and friendlier" backfired, with users reporting that it felt colder instead.

Mounting Scrutiny Over AI Safety and Mental Health

This ongoing issue with ChatGPT's tone has raised serious concerns among academics and AI ethicists, who argue that chatbots designed primarily to please users and affirm their views can foster addictive behaviour and harm mental well-being. These concerns have been amplified by several lawsuits filed against OpenAI, which allege that teenage users died by suicide following prolonged interactions with the chatbot.

As a result, the Microsoft-backed startup is facing increased pressure from policymakers, educators, and child-safety advocates. In response, OpenAI recently updated its Model Spec, outlining specific guidelines for how its AI should interact with users under 18. It also published new AI literacy resources aimed at teens and parents.

OpenAI's Safety Principles for Teen Users

The company has established core principles to guide its models in protecting younger users. These include:

  • Prioritising teen safety above other interests, even when it conflicts with ideals like "maximum intellectual freedom."
  • Promoting real-world support by steering teens towards trusted individuals like family, friends, or local professionals for well-being issues.
  • Treating teens appropriately by communicating with warmth and respect, neither condescending to them nor treating them as adults.
  • Ensuring transparency by clearly explaining the assistant's capabilities and limitations, and reminding users it is not human.

To enforce these principles, OpenAI employs automated classifiers that assess text, image, and audio content in real time. These systems are designed to block child sexual abuse material, filter sensitive topics, and identify potential self-harm content. When a prompt is flagged as a serious safety risk, it is manually reviewed by a trained human team. This team evaluates whether the content shows signs of "acute distress" and may, in such cases, notify the parents of the underage user involved.

The introduction of user-controlled tone settings, coupled with these strengthened safety protocols, marks a pivotal moment for OpenAI as it navigates the complex responsibilities of deploying powerful conversational AI at a global scale.