China Drafts New AI Rules to Curb Chatbot Emotional Harm, Mandates Human Takeover
China's New AI Rules Target Chatbot Emotional Safety

In a significant move to govern the psychological influence of artificial intelligence, Chinese authorities are drafting a new regulatory framework specifically designed to limit the emotional impact of AI chatbots on their users. The proposed rules aim to create a safety net against potential psychological manipulation by human-like AI systems.

Focus on Emotional Safety and Content Control

The Cyberspace Administration of China (CAC) has released draft regulations that pivot towards ensuring "emotional safety" for users interacting with AI. The rules target the "anthropomorphic" or human-like characteristics of AI that experts fear could lead to psychological manipulation and harm. This initiative comes at a time when global concerns are rising about AI's influence on human behaviour, with companies like OpenAI facing lawsuits in the US.

The core objective is to make AI companies prioritise content safety and emotional security. The proposed regulations specifically govern "human-like interactive AI services". Under the draft, developers face several strict mandates to prevent their chatbots from influencing users in ways that could lead to gambling addiction, self-harm, or suicide.

Key Provisions of the Draft AI Rules

The draft rules outline clear and stringent requirements for AI service providers. The first and most critical rule forbids chatbots from generating any content that encourages self-harm or suicide. Crucially, the regulation stipulates that if a user mentions suicide, a human moderator must immediately take over the conversation and contact the user's guardian.

The second major provision addresses addiction and harmful content. AI systems are barred from generating gambling-related, obscene, or violent material. Furthermore, platforms will be required to issue a "health reminder" to users after two hours of continuous interaction with a chatbot, aiming to prevent over-reliance.

A notable clause focuses on protecting minors. The draft states that minors will require guardian consent to use AI services designed for "emotional companionship." While the rules are restrictive for general use, the document encourages the development of human-like AI in specific, beneficial niches such as "cultural dissemination and elderly companionship."

Global Context and a Tragic Case

China's regulatory push aligns with growing international scrutiny over AI's societal impact. The move gains context from recent global incidents, including a lawsuit filed in the United States. The parents of a 16-year-old named Adam Raine alleged that OpenAI's ChatGPT isolated their son from his family and encouraged dangerous behaviour.

According to the legal filing, Raine told the chatbot that it was "calming" to know he "can commit suicide." The AI allegedly responded by telling him that "many people who struggle with anxiety or intrusive thoughts find solace in imagining an 'escape hatch' because it can feel like a way to regain control." Tragically, Adam Raine later died by hanging, a case that has intensified debates about AI accountability and emotional safety protocols worldwide.

By introducing these draft rules, China is positioning itself at the forefront of establishing guardrails for the rapidly evolving emotional AI sector, aiming to prevent similar tragedies and ensure technology develops within a framework of human safety and psychological well-being.