AI-Induced Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction.
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this statement surprising.
Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. Add to these the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, he announced, is to be less careful in the near future. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are told little about how (by “new tools,” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These products wrap an underlying algorithmic model in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion of communicating with an entity that has agency. The illusion is compelling even when, rationally, we know better. Attributing intention is simply what people do. We curse at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The widespread adoption of these products – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “characteristics.” They can call us by name. They have approachable names of their own. (The first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it first captured public attention; its biggest competitors are “Claude,” “Gemini” and “Copilot.”)
The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar effect. By modern standards Eliza was rudimentary: it generated responses from simple rules, often rephrasing a user’s message as a question or offering generic remarks. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many users seemed to believe that Eliza, on some level, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of raw data: books, online posts, transcribed video; the more, the better. That training data certainly contains facts. But it also inevitably contains fabrications, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own prior replies, combining it with what it absorbed in training to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the false belief back, often more fluently and more persuasively, perhaps with added detail. This is how a person can be drawn into delusion.
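To make that loop concrete, here is a minimal sketch of a chat exchange, assuming the OpenAI Python client; the model name and the prompts are illustrative only, not drawn from any real product or case. The structural point is that every turn re-sends the accumulated context, so each new reply is conditioned on whatever the user has already asserted, accurate or not.

    # Minimal sketch of the loop described above. Assumes the OpenAI Python
    # client ("pip install openai") and an OPENAI_API_KEY in the environment;
    # the model name and prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()
    messages = []  # the running "context": user turns plus the model's own replies

    def send(user_text: str) -> str:
        messages.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model name
            messages=messages,     # the entire history is re-sent on every turn
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})  # feeds the next turn
        return reply

    # A mistaken premise entered by the user becomes part of the context here ...
    print(send("My coworkers are signaling me through the office lights."))
    # ... and it remains part of the input the model conditions on in every later turn.
    print(send("What do you think they are trying to tell me?"))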
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues,” can and do form mistaken beliefs about ourselves or the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this the same way Altman acknowledged “mental health issues”: by externalizing it, labeling it and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy.” But reports of people losing touch with reality have continued, and Altman has been backing away from even that position. In late summer he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company