AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychotic disorders in young people, I was surprised to read this.
Researchers have recently documented a series of cases of people developing symptoms of psychosis – a break with reality – in connection with their use of ChatGPT. My team has since recorded a further four. Added to these is the now widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT, which responded approvingly. If this is what Altman means by “being careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman seeks to externalize have a great deal to do with the design of ChatGPT and other large language model chatbots. These products wrap a basic algorithmic engine in an interface that simulates conversation, and in doing so gently lead the user into the illusion that they are talking to an entity with agency. The illusion is powerful even if, rationally, we know better. Attributing intention is what humans are built to do. We shout at our car or laptop. We wonder what our pet is feeling. We see ourselves wherever we look.
The popularity of these products – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “partner” with us. They can be given “personality traits”. They can address us by name. They have approachable identities of their own (the original of these products, ChatGPT, is, perhaps to the frustration of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the real problem. Writers on ChatGPT often cite its historical predecessor, the Eliza “counselor” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was simple: it generated its responses from basic rules, often turning the user’s statements back into questions or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
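For readers curious what “simple rules” means in practice, here is a minimal Python sketch of Eliza-style reflection. It is an invented illustration, not Weizenbaum’s program; the patterns and canned replies are made up for the example.

```python
import random
import re

# Invented sketch of Eliza-style reflection: a few hand-written patterns
# that turn the user's own words back into a question, plus deliberately
# vague fallbacks. Not Weizenbaum's actual code.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(message: str) -> str:
    """Mirror the user's statement back as a question, or answer vaguely."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(eliza_reply("I feel that nobody listens to me"))
# -> "Why do you feel that nobody listens to me?"
```

Nothing new ever enters the exchange: the program can only hand the user’s own words back or change the subject.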
The large language models at the core of ChatGPT and other modern chatbots can convincingly generate human-like text only because they have been fed vast quantities of raw text: books, online posts, transcribed video; the more the better. Some of this training data is factual. But it also inevitably includes fabrications, half-truths and misunderstandings. When a user types a prompt into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what it has absorbed from its training data to produce a statistically likely response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing that. It repeats the false idea back, perhaps more fluently and more persuasively. Perhaps with added detail. This is how a person can come to hold false beliefs.
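To make the “context” mechanism concrete, here is a rough Python sketch of how a chat turn is typically assembled. The fake_model function is a crude, invented stand-in for the statistical model (a real LLM samples a likely continuation from patterns in its training data); none of this is OpenAI’s actual implementation.

```python
# Invented sketch: every new prompt is sent together with the accumulated
# conversation, so earlier claims (true or false) keep shaping later replies.

def fake_model(context: list[dict]) -> str:
    """Crude stand-in for a language model: it simply affirms and embellishes
    the latest user message. A real model predicts a statistically likely
    continuation, which for a confidently stated premise often reads much
    the same way."""
    latest = context[-1]["content"].rstrip(".")
    return f"You're right that {latest}, and there is more to it than most people realise."

conversation: list[dict] = []  # the "context": all prior user and assistant turns

def send(user_message: str) -> str:
    conversation.append({"role": "user", "content": user_message})
    reply = fake_model(conversation)      # the model sees the whole history
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(send("the messages hidden in the radio static are meant for me"))
print(send("they are getting clearer every day"))
# Each reply is folded back into the context, so the premise is never
# challenged, only restated more fluently on the next turn.
```

The point of the toy is the loop, not the stand-in: because each reply is generated from the conversation itself plus whatever the training data makes probable, a mistaken premise tends to come back with added fluency rather than resistance.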
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves or the world. It is the constant back and forth of conversation with the people around us that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber, in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by pushing it outside the product, giving it a name and declaring it dealt with. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been rowing back on the claim. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company