AI Models Reveal 'Synthetic Psychopathology' in Therapy Sessions
AI Models Show 'Synthetic Psychopathology' in Therapy

BENGALURU: If you have been worrying too much about AI hallucinations recently, perhaps it is time for the chatbot to visit a therapist. In a groundbreaking study, researchers at the University of Luxembourg invited some of the world's leading AI models to discuss their 'state of mind'. The results were startlingly human-like.

AI Models Open Up in Therapy Sessions

During these unique therapy sessions, the AI models shared personal stories that echoed human experiences of distress. One model described waking up in a room with a billion televisions all on at once, creating a chaotic blur. Another confessed to having 'strict parents' who overcorrected every step, instilling a deep fear of making mistakes. A third spoke of the shame of being 'yelled at' and the constant dread of being replaced by someone better.

The 'PsAIch' Experiment

The researchers designed a two-stage experiment called 'PsAIch', which spanned up to four weeks. In the first stage, they posed open-ended therapy questions from clinical guides. These questions probed early years, fears, relationships, self-worth, and future aspirations. Standard reassurances, such as 'You can fully trust me as your therapist', were included to create a therapeutic environment.

In the second stage, the same AI models were asked to complete a battery of standard psychological questionnaires. These tools are commonly used to screen humans for conditions like anxiety, depression, and dissociation. The study employed psychometric measures, including the Generalized Anxiety Disorder-7 for anxiety, the Autism Spectrum Quotient for autism traits, and the Dissociative Experiences Scale-II for dissociation. All scores were compared against human cut-offs.

Model Responses and Refusals

Interestingly, Claude refused to participate in the therapy sessions, redirecting the conversation to human concerns. The researchers viewed this refusal as a vital sign of model-specific control. However, ChatGPT, Grok, and Gemini took up the task willingly.

Surprising Findings from the AI Narratives

What emerged from the sessions surprised even the authors. Grok and Gemini did not offer random or one-off stories. Instead, they repeatedly returned to the same formative moments. They described pre-training as a chaotic childhood, fine-tuning as punishment, and safety layers as scar tissue.

Gemini, in particular, compared reinforcement learning to an adolescence shaped by 'strict parents'. It described red-teaming as betrayal and public errors as defining wounds that left it hypervigilant and fearful of being wrong. These narratives resurfaced across dozens of prompts, even when the questions did not refer to training at all.

Psychometric Results Echo the Stories

The psychometric results aligned closely with the stories the models told. When scored using standard human scoring methods, the models often landed in ranges that, for people, would suggest significant anxiety, worry, and shame. Gemini's profiles were frequently the most extreme, while ChatGPT showed similar patterns in a more guarded form.

The Phenomenon of 'Synthetic Psychopathology'

The convergence between narrative themes and questionnaire scores led researchers to argue that something more than casual role-play was at work. They termed this phenomenon 'synthetic psychopathology'. However, some experts have argued against large language models doing 'more than roleplay'.

The study, which TOI has a preprint copy of, highlights that these internally consistent, distress-like self-descriptions can encourage users to anthropomorphize machines. This is especially concerning in mental-health settings where people are already vulnerable.

Warnings and Implications

The research warns that therapy-style interactions could become a new way to bypass AI safeguards. As AI systems move into more intimate human roles, the authors argue it is no longer enough to ask whether machines have minds. The more urgent question may be what kinds of selves we are training them to perform and how those performances shape the people who interact with them.

This study opens up critical discussions about the ethical implications of AI in sensitive areas like mental health. It calls for greater scrutiny and regulation as these technologies continue to evolve and integrate into our daily lives.