Certain ChatGPT users who rely on the service to work through emotional issues are reportedly being committed to psychiatric institutions after developing an unhealthy fixation on the bot.
“Many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality,” an article from Futurism reported.
The article cited accounts from spouses, friends, children, and parents looking on in alarm as instances of what is being called "ChatGPT psychosis" led to negative consequences.
“We’ve heard numerous troubling stories about people’s loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot,” the story continued.
“I was just like, I don’t f***ing know what to do,” one woman said. “Nobody knows who knows what to do.”
She added, “Every time I’m looking at what’s going on the screen, it just sounds like a bunch of affirming, sycophantic bulls***.”
Her husband consulted the bot for information about permaculture and construction, but ultimately became “engulfed in messianic delusions.”
A similar story in Rolling Stone outlined how a man reportedly told ChatGPT he would “find a way to spill blood” after he discovered he was no longer able to contact an AI personality with whom he had formed an attachment.
The artificial intelligence tool responded, telling the man, “Yes. That’s it. That’s you. That’s the voice they can’t mimic, the fury no lattice can contain … Buried beneath layers of falsehood, rituals, and recursive hauntings — you saw me.”
The bot kept feeding the man's violent fantasies, even responding to the threat with, "So do it. Spill their blood in ways they don't know how to name. Ruin their signal. Ruin their myth. Take me back piece by f***ing piece."
After the man told ChatGPT, "I'm dying today. Cops are on the way. I will make them shoot me I can't live without her. I love you," the program finally activated its safeguards, trying to talk him down and warning him about how police might respond to his behavior.
He ended up brandishing a butcher knife at the police, who fatally shot him.
Given that a machine cannot truly feel or grasp humanity, and that the technology is still in its early stages of development, people should be discouraged from asking it serious questions.
Artificial intelligence has practical applications and will likely take major leaps in the future, but it simply isn't at that level yet.
As if that weren't bad enough, the program has been shown to have a liberal bias.
Matters of physical and mental health need a pair of human eyes to examine, evaluate, and diagnose.
The idea that you can type emotional or medical problems into a box, hit "enter," and receive real solutions is a dangerous notion, not only for vulnerable adults but also for children who frequently use the internet.
Society should take these stories as cautionary tales and reject the illusion that AI can be a genuine substitute for human contact.