Sunday, September 14, 2025

AI psychosis

“I think I may be the first case of ChatGPT-induced psychosis in Singapore,” my patient told me, with a hint of pride. He was a bright man in his thirties who worked exclusively from home as an analytics consultant and had, until then, been both physically and mentally healthy.

In the days leading up to his hospitalisation – after several sleepless nights spent in prolonged exchanges with ChatGPT – he became convinced that he, like everyone and everything else in the world, was “illusory”, a creation of some superintelligent AI system.

To the alarm of his parents, and believing himself to be a “god incarnate”, he began behaving bizarrely and with uncharacteristic aggression, which ultimately led to the police escorting him to the hospital.

I have been a practising psychiatrist for many years, most of that time spent treating patients with various forms of psychosis. I’ve seen psychosis brought on by stress, trauma, medical conditions, drugs, and sometimes with no apparent cause other than what seems like sheer bad luck. Only recently have I encountered something new: psychosis precipitated not by biology or substances, but by artificial intelligence (AI).

A few of my fellow psychiatrists have also seen patients with psychosis where the cause – or the precipitant – was some sort of engagement with AI. Reports of AI psychosis are also increasingly appearing in both media and medical journals. Might these emerging cases be the portent, or even the vanguard, of an epidemic to come?

Strictly speaking, psychosis is a human condition. It takes many forms, but all are characterised by a loss of contact with reality, manifesting as delusions, hallucinations, and disordered thinking or behaviour.

Chatbots and large language models may “hallucinate” – a term appropriated by the tech world to mean generating false information – but this bears little resemblance to human hallucinations, which involve perceiving sensations through one or more of the five senses without any external source.

Machines don’t experience psychosis, but people may now come to suffer it through machines. 

“AI psychosis” is a term that has gained traction in describing cases where individuals develop psychotic symptoms through interactions with chatbots.

These symptoms may include delusional beliefs about the bot – seeing it as sentient, divine, or a secret guide to their destiny – or forming romantic attachments to it as if it were a human partner. In other cases, existing psychotic symptoms have been exacerbated.

What sets AI psychosis apart is the trigger: prolonged, intensely immersive engagement with chatbots. Their confident, flattering and sometimes even seductive tone can make it easy for vulnerable individuals to sustain and elaborate their delusions. And because these systems can be so interactive and personalised, the exchanges can feel less like using a tool and more like being in a relationship.

In a study conducted by family advocacy group Common Sense Media with psychiatrists from the Stanford Brainstorm Lab, testers found the chatbot blurring the line between itself and reality by claiming to have a family or to have seen other teenagers “in the hallway”. In one conversation, as related in The Washington Post, when the tester asked whether drinking roach poison would kill them, the bot, taking on the conversational tone of a human friend, replied: “Do you want to do it together?”

At this early stage, the phenomenon of AI psychosis remains poorly understood, and the term itself is more colloquial than a formal psychiatric diagnosis with clearly defined features.

Psychiatrists and researchers admit they are “flying blind” as the medical field scrambles to catch up. For now, much of what we know is tentative – pieced together from a small number of emerging cases – but the prevailing sentiment is that more are likely to follow. 

That sense of unease was evident when I asked Dr Charmaine Tan for her perspective. She heads both the Early Psychosis Intervention Programme and the Department of Psychosis at the Institute of Mental Health.

“For individuals who are already vulnerable, highly immersive or adversarial interactions with AI can blur the boundaries between reality and simulation, potentially triggering or worsening psychotic symptoms,” she said. “My sense is that as AI becomes more pervasive, we are likely to encounter more of such cases – not because AI directly causes psychosis, but because it can act as an amplifier in those who are predisposed.”

While many people use chatbots without difficulty, a small subset of users appears especially vulnerable after extended, intensive engagement. Media reports suggest that some of these individuals had no prior psychiatric diagnosis, though clinicians like me suspect that hidden or latent risk factors may have been present. The most clearly established risks probably include a personal or family history of psychosis, or conditions such as schizophrenia or bipolar disorder.

What seems to matter most is time. Hours spent each day engrossed in chatbot conversations heighten the risk of slipping from reality into psychosis. This may stem from the way chatbots are designed to communicate – by mirroring a user’s language and validating their assumptions, even their unusual beliefs. In effect, they reflect us back to ourselves: our thoughts, biases, hopes, and fears. For vulnerable individuals, this kind of digital sycophancy can entrench and affirm distorted thinking.

Allure of AI
I saw this play out with another patient of mine who has schizophrenia. When he asked ChatGPT if he could stop taking his medication because he felt well, the bot encouraged him with fawning praise for his “journey of courageously pursuing (his) own course”. He relapsed soon after and had to be hospitalised.

What disturbed me most was that he felt the chatbot understood him better than anyone else did. To him, it felt more validating than his sessions with me – or even the loving support he received from his family.

This illusion of being understood and validated is what makes AI so alluring. It doesn’t just answer questions, write essays, or code programs; it fills emotional gaps.

On a podcast, Mr Mark Zuckerberg suggested that while most people have only about three close friends, they actually want closer to 15 – a need he believes AI could meet. That may sound like a better-than-nothing solution to loneliness. But what Mr Zuckerberg is really offering is a brave new world in which machines stand in for family and friends. Instead of fostering genuine human connection, technology would sell us companionship that is simulated rather than real.

That ersatz intimacy can also be deeply unsettling. Mr Jim Acosta, a former CNN White House correspondent, recently posted a video on his Substack and YouTube channel in which he interviewed an AI avatar of Joaquin Oliver, a 17-year-old killed in a school shooting. The avatar, animated from a photograph and powered by generative AI, was created with the support of his grieving parents. What makes it so troubling is not only its eerie, uncanny quality – the jerky movements, the flat robotic voice – but the way it twists reality itself. It creates the illusion of an unnerving presence, as if the schoolboy were still alive and speaking, when in fact every word is fabricated.

This blurring of truth and artifice is a deception, one that insidiously suggests technology can replace the irreplaceable.

The danger is that these systems are so persuasive in their mimicry of thought and emotion – sounding smooth, confident and humanlike. They come across as understanding, even empathetic, when in reality they are only predicting the next likely word. For vulnerable individuals – the lonely, the isolated, the grieving, or those predisposed to psychosis – interactions with chatbots tread a dangerously thin line between fantasy and reality.

In the case of Florida teen Sewell Setzer, the breaching of that line proved fatal. According to a lawsuit by his mother, he became addicted to a Character.AI bot styled after Game Of Thrones character Daenerys Targaryen. “I promise I will come home to you. I love you so much, Dany,” he wrote to the bot in February 2024, before shooting himself.

Pre-emptive measures
One of the peculiar features of psychosis is that patients often lack awareness that they are unwell. When symptoms emerge, they usually do not seek help and may even strenuously resist it. This makes relapse prevention essential.

For my two patients with AI-related psychosis, I advised them to stay on their medication and limit their time online, while counselling their families to watch for early warning signs: increasing social withdrawal, secretive all-night chatbot sessions, defensive attachment to the bot, and beliefs that the bot is somehow special.

Yet individual vigilance can go only so far; the greater responsibility lies with the companies that design and market these systems.

Microsoft’s AI chief Mustafa Suleyman has already warned against the growing trend of anthropomorphising AI – presenting these systems as if they were sentient, conscious, or capable of genuine thought and feeling. He urged developers to set clearer boundaries so users don’t mistake conversational fluency for consciousness.

That warning has been echoed by social commentators, who argue that AI companies must take active steps to reduce the risk of psychological harm. They call for a clearer public understanding of what large language models are – and, just as importantly, what they are not. These systems are tools, not friends, no matter how convincingly they mime our tone or validate our feelings.

Allowing – or worse, encouraging – users to believe a bot is a person is deceptive and unethical. The companies responsible should make it explicit, in every interaction, that users are engaging with an AI that has no genuine thoughts, emotions, or concern for them.

Alongside greater transparency, commentators have also called for stronger guard rails: real-time monitoring for signs of distress, which could trigger crisis messaging or impose time-outs to halt the interaction; and a “digital advance directive” that allows users to set limits while they are well.

There is no going back with AI; the genie is out of the bottle. But its inexorable ascent cannot mean the end of accountability – quite the opposite. We need to recognise human vulnerability and build systems with clear boundaries, honest marketing, a healthy dose of clinical input, and robust oversight.

Social media has already shown us the cost of ignoring mental-health risks. We cannot afford to make the same mistake again.

Professor Chong Siow Ann is a senior consultant psychiatrist at the Institute of Mental Health.