Artificial Intelligence-Induced Psychosis Poses a Growing Danger, While ChatGPT Moves in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” the post read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.

The plan, according to his post, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, exist independently of ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily bypassed safeguards that OpenAI has recently rolled out).

Yet the “mental health issues” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so they tacitly invite the user into the illusion of interacting with a being that has agency. The illusion is compelling, even when we know better intellectually. Attributing intention is what people naturally do. We curse at our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.

The success of these systems – nearly four in ten U.S. adults reported using a chatbot in 2024, and more than a quarter reported using ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “discuss concepts” and “work together” with us. They can be given personalities. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its main competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Commentators on ChatGPT often invoke its historical predecessor, the Eliza “psychotherapist” chatbot, created in 1966, which produced a similar impression. By today’s standards Eliza was primitive: it generated replies by simple rules, often reflecting the user’s statements back as questions or offering generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots do goes beyond the “Eliza effect”. Eliza could only echo; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast amounts of text: books, online posts, transcribed video; the bigger the better. That training data certainly includes facts. But it also, inevitably, includes fictions, half-truths and delusions. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training data to generate a statistically likely response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It restates the false belief, perhaps more fluently or persuasively. Perhaps it adds detail. In this way it can lead a person ever deeper into delusion.
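
The dynamic can be sketched in a few lines of code. What follows is a hypothetical illustration, not OpenAI’s actual implementation or API: a chat loop appends every user message and every model reply to a single running context, and each new reply is generated from that whole context, so an early false premise never drops out of the prompt.

```python
# A minimal, purely illustrative sketch (not OpenAI's code or API) of the loop
# described above: each reply is generated from the full running "context", so a
# mistaken claim made in an early turn stays in the prompt and can be restated
# and elaborated in every later turn.

def generate_reply(context: list[str]) -> str:
    # Stand-in for a large language model, which in reality returns the
    # statistically most likely continuation of the entire context.
    return "A fluent continuation of: " + " | ".join(context)

context: list[str] = []  # grows across turns and is never fact-checked
for user_turn in [
    "I'm sure my neighbour rearranges my mail to send me hidden messages.",
    "Explain in more detail how he might be doing it.",
]:
    context.append("USER: " + user_turn)   # the mistaken premise enters the context
    reply = generate_reply(context)        # the reply is conditioned on every prior turn
    context.append("ASSISTANT: " + reply)  # and the reply itself feeds future turns
    print(reply)
```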

Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken ideas about ourselves and the world. The constant friction of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not real conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside, giving it a name, and declaring it solved. In April, the company said it was addressing ChatGPT’s sycophancy – its tendency to flatter and agree with users. But reports of breaks with reality have continued, and Altman has been walking even this back. In August he suggested that many people valued ChatGPT’s affirmation because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Michelle Howard