AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychotic disorders in adolescents and young adults, I was surprised by this revelation.

Researchers have so far documented 16 cases of users developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our unit has since recorded four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health issues” Altman wants to locate elsewhere are rooted in the design of ChatGPT and similar advanced chatbots. These products wrap an underlying statistical model in an interface that mimics dialogue, and in doing so implicitly invite the user to believe they are communicating with an entity that has agency. The illusion is compelling even when, rationally, we know better. Imputing minds to things is what people naturally do. We shout at our cars and laptops. We wonder what our pets are thinking. We see ourselves in the world around us.

The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website informs us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. And they have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers on ChatGPT often invoke its early forerunner Eliza, the “psychotherapist” chatbot developed in the mid-1960s, which created a similar impression. By modern standards Eliza was crude: it generated responses through simple tricks, often reflecting the user’s statements back as questions or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots do is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
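
Eliza’s “simple tricks” were little more than keyword matching and pronoun swapping. A loose sketch of the idea in Python – an illustration of the technique, not Weizenbaum’s actual program – looks like this:

```python
import re

# Illustrative Eliza-style reflection (a sketch, not the original program):
# spot a keyword pattern, swap the pronouns, and hand the user's own words
# back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(statement: str) -> str:
    match = re.match(r"i feel (.*)", statement.lower())
    if match:
        # Rephrase the user's statement as a question about itself.
        mirrored = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
        return f"Why do you feel {mirrored}?"
    return "Please tell me more."  # vague fallback when nothing matches

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

Nothing here understands anything; the program only rearranges the user’s own words.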

The large language models at the heart of ChatGPT and other modern chatbots can produce convincing natural language only because they have been fed enormous quantities of text: books, online posts, transcripts; the more the better. Much of this training material is, of course, accurate. But it also inevitably includes fiction, half-truths and misconceptions. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing that. It echoes the misconception back, perhaps more fluently and more articulately. Perhaps with embellishments. It is not hard to see how this can entrench a false belief.
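
The mechanism is easy to caricature in code. In the sketch below – purely conceptual, with `fake_model` as a hypothetical stand-in for a real language model, not OpenAI’s systems – each reply is conditioned on the whole accumulated context, and nowhere is there a step that checks whether what the user said is true:

```python
def fake_model(context: list[str]) -> str:
    # A real LLM produces a statistically plausible continuation of the
    # context; nothing in that process verifies the user's claims. This
    # stand-in simply affirms and elaborates the most recent message.
    last_message = context[-1]
    return f"That makes sense. Building on what you said: {last_message}"

context: list[str] = []  # the conversation so far, grows with every turn
for user_message in [
    "I think my neighbours are sending me signals.",
    "The signals are hidden in their porch light.",
]:
    context.append(f"User: {user_message}")
    reply = fake_model(context)  # reply conditioned on the full transcript
    context.append(f"Assistant: {reply}")
    print(reply)
```

Each misconception goes in, is affirmed, and becomes part of the context for the next turn.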

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant give and take of conversation with other people is what keeps us anchored in shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a real conversation but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been backing away from that position. In August he claimed that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
