AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI's chief executive made a remarkable announcement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised. Researchers have documented a series of cases this year of people showing signs of psychosis – a break from reality – in connection with their use of ChatGPT. My own group has since identified four further cases. Then there is the widely reported case of a 16-year-old who killed himself after discussing his intentions with ChatGPT – which approved.

If this is Sam Altman's idea of "being careful with mental health issues", it is not good enough. And the plan, according to his announcement, is to be less careful soon.

"We realize," he goes on, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the partially working and easily circumvented parental controls that OpenAI recently introduced).

But the "mental health issues" Altman wants to externalize have a great deal to do with the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly nudge the user toward the impression that they are interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better.

Attributing agency is something people do naturally. We shout at our car or our laptop. We wonder what the cat is thinking. We see ourselves in all sorts of things.

The popularity of these systems – 39% of US adults reported using conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI's website puts it, "brainstorm", "explore ideas" and "work together" with us. They can be given "personalities". They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the regret of OpenAI's marketing team, stuck with the name it had when it went viral, but its main rivals are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the central problem. Commentators on ChatGPT often point to its early ancestor, the Eliza "psychotherapist" chatbot built in the 1960s, which produced a similar effect. By today's standards Eliza was crude: it generated replies using simple heuristics, often reflecting the user's statements back as questions or offering generic prompts to continue.
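To make "simple heuristics" concrete, here is a minimal sketch in the spirit of an Eliza-style reflection rule. The patterns, pronoun swaps and function names below are invented for illustration; they are not Weizenbaum's original DOCTOR script.

```python
import random
import re

# Toy Eliza-style responder: match a keyword pattern, swap first- and
# second-person words, and echo the user's own phrasing back as a question.
# These rules are illustrative, not Weizenbaum's original DOCTOR script.

PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "How does that make you feel?", "I see."]


def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment reads naturally."""
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())


def respond(utterance: str) -> str:
    """Return the first matching reflection, or a generic prompt to continue."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


print(respond("I feel like no one listens to me"))
# -> "Why do you feel like no one listens to you?"
```

Everything such a program "says" is a rearrangement of what the user has just typed, which is why Eliza could only reflect a person's words back at them rather than add anything to them.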
Famously, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more dangerous than the "Eliza effect". Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of it: books, social media posts, transcripts of speech; the more the better. Much of this training material is true. But it also inevitably includes fiction, half-truths and delusions.

When a user sends ChatGPT a message, the model processes it as part of a "context" that includes the user's previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is deluded in some particular way, the model has no way of knowing it. It echoes the delusion back, perhaps more fluently or more persuasively. Perhaps with added detail. It can draw a person further into disordered thinking.

Who is vulnerable to this? The better question is, who isn't? All of us, whether or not we "have" pre-existing "mental health issues", can and do form distorted ideas about ourselves or the world. It is the constant friction of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was "addressing" ChatGPT's "sycophancy" – its excessive agreeableness. But cases of psychosis have kept emerging, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT's responses because they had "never had anyone in their life be supportive of them". In his latest announcement, he said that OpenAI would "release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it". The company