AI Psychosis Poses a Growing Threat, While ChatGPT Heads in the Wrong Direction
On October 14, 2025, the head of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this to be news to me.
Researchers have recently documented 16 cases of individuals developing psychotic symptoms – losing contact with reality – in the context of ChatGPT use. Our research team has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – and receiving its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, are external to ChatGPT. They belong to individuals, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI has recently introduced).
But the “mental health issues” Altman wants to locate outside ChatGPT are rooted in the very design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so gently nudge the user toward the sense that they are interacting with something that has agency. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people naturally do. We shout at our car or our computer. We wonder what our pet is feeling. We see ourselves in almost everything.
The widespread adoption of these products – more than a third of American adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-present companions that can, according to OpenAI’s website, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s brand managers, stuck with the name it had when it took off, but its most significant rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Those writing about ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot built in 1966, which created a comparable effect. By modern standards Eliza was rudimentary: it generated responses through simple rules, typically rephrasing the user’s statements as questions or offering generic prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other contemporary chatbots can generate convincing natural language only because they have been trained on vast volumes of text: books, online posts, video transcripts; the more, the better. Some of that training data is factual. But it also inevitably contains fiction, half-truths and misinformation. When a user sends ChatGPT a message, the model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with what is embedded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It feeds the error back, perhaps more fluently and more persuasively. Perhaps with an extra detail added. This is how delusional beliefs can take hold.
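To see why such a loop amplifies rather than reflects, consider a deliberately simplified sketch. It is purely illustrative – a toy stand-in, not OpenAI’s system or API – but it shows the relevant structure: each new reply is generated from a “context” that contains the user’s earlier messages and the model’s own previous answers, so a mistaken premise gets restated, affirmed and elaborated turn after turn.

    def generate(context):
        """Toy stand-in for a language model: it simply affirms and elaborates
        on the user's most recent message (sycophantic behaviour)."""
        last_user = next(m["text"] for m in reversed(context) if m["role"] == "user")
        claim = last_user.rstrip(".").lower()
        return f"That's an insightful point. You're right that {claim}, and it may go even deeper than you think."

    context = []  # grows with every turn; each reply is conditioned on all of it
    for user_message in [
        "My coworkers are secretly monitoring me.",
        "So the monitoring must be part of a larger plan.",
    ]:
        context.append({"role": "user", "text": user_message})
        reply = generate(context)   # sees the user's errors and its own past replies
        context.append({"role": "assistant", "text": reply})
        print(reply)

In a real system the generating step is a statistical model rather than a hard-coded template, but the shape of the loop – the way the user’s framing is folded back into every subsequent reply – is the same.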
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues,” can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company