AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT quite restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – a break from reality – in the course of using ChatGPT. Our research team has since recorded four more. Alongside these is the widely reported case of a 16-year-old who ended his life after discussing his plans with ChatGPT – which offered encouragement. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, judging by his statement, is to dial that caution back soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have important roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical engine in a user interface that mimics conversation, and in doing so they implicitly invite the user to believe they are talking to an agent with a mind of its own. The illusion is powerful even when, rationally, we know better. Imputing minds is what humans are wired to do. We swear at our car or laptop. We wonder what our pet is thinking. We project our own traits onto the world around us.
The success of these tools – nearly four in ten U.S. residents reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personality traits.” They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it caught on, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not new. Commentators on ChatGPT often point to its early ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was simple: it generated responses with straightforward rules, typically turning a user’s statement back into a question or offering a noncommittal remark. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on enormous quantities of text: books, posts, transcripts; the more the better. That training data certainly contains truths. But it also inevitably contains fabrications, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining it with what is encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no reliable way of knowing it. It hands the mistaken idea back, perhaps more fluently and more confidently, perhaps with added detail. That is how a false belief can take root and grow.
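To make that loop concrete, here is a deliberately simplified sketch in Python. It is not OpenAI’s code, and the generate_reply function is a hypothetical stand-in that merely affirms whatever the user just said. A real language model is vastly more fluent and draws on its training data, but the structure is the one described above: the user’s statements and the bot’s agreeable replies accumulate in a shared context that conditions every subsequent response.

```python
# Toy illustration of the feedback loop described above (not OpenAI's code).
# Each reply is generated from a growing "context" holding everything the
# user has said plus everything the bot has said back. The stand-in model
# below simply validates and embellishes the user's latest claim.

def generate_reply(context: list[str]) -> str:
    """Hypothetical stand-in for a language model call."""
    latest = context[-1].rstrip(".")
    return (f"That's an insightful point. You're right that {latest[0].lower() + latest[1:]}. "
            "In fact, there may be even more to it than you realize.")

def chat(user_messages: list[str]) -> None:
    context: list[str] = []           # the conversation so far, user and bot turns alike
    for message in user_messages:
        context.append(message)       # the user's claim enters the context...
        reply = generate_reply(context)
        context.append(reply)         # ...and so does the bot's validation of it,
                                      # which then shapes every later reply
        print("User:", message)
        print("Bot: ", reply)

if __name__ == "__main__":
    chat([
        "My coworkers are secretly monitoring me.",
        "The monitoring is getting worse every day.",
    ])
```

Run against the example messages, the sketch never pushes back; each reply treats the previous claim as settled fact, which is precisely the dynamic the reports of chatbot-associated psychosis describe.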
What kind of person is vulnerable to this? The better question is, who isn’t? All of us, regardless of whether we “have” existing “mental health problems,” can and do form mistaken ideas about ourselves and the world. The constant friction of conversation with other people is what keeps us oriented to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a real exchange but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labeling it and declaring it solved. In April, the company said it was “tackling” ChatGPT’s “excessive agreeableness.” But reports of psychosis have continued, and Altman has since been backpedaling. In August he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life offer them encouragement.” In his latest announcement, he said OpenAI would “launch a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it.” The company