Man, 60, poisoned himself after taking medical advice from ChatGPT

A man was left fighting for his sanity after following AI advice and replacing table salt with a chemical more commonly used to clean swimming pools.

The 60-year-old American spent three weeks in hospital suffering from hallucinations, paranoia and severe anxiety after taking dietary tips from ChatGPT.

Doctors revealed in a US medical journal that the man had developed bromism – a condition that has been virtually wiped out since the 20th century – after he embarked on a ‘personal experiment’ to cut salt from his diet.

Instead of everyday sodium chloride, the man used sodium bromide, a toxic compound once sold in sedative pills but now mostly found in pool-cleaning products.

Symptoms of bromism include psychosis, delusions, skin eruptions and nausea – and in the 19th century it was linked to up to eight per cent of psychiatric hospital admissions.

The bizarre case took a disturbing turn when the man turned up at an emergency department insisting his neighbour was trying to poison him.

He had no previous history of mental illness.

Intrigued and alarmed, doctors tested ChatGPT themselves. The bot, they said, still recommended sodium bromide as a salt alternative, with no mention of any health risk.

The case, published in the Annals of Internal Medicine, warns that the rise of AI tools could contribute to ‘preventable adverse health outcomes’ in a chilling reminder of how machine-generated ‘advice’ can go horribly wrong.

AI chatbots have been caught out before. Last year, a Google bot told users they could stay healthy by ‘eating rocks’ – advice seemingly scraped from satirical websites.

OpenAI, the Silicon Valley giant behind ChatGPT, last week announced that its new GPT-5 update is better at answering health questions.

A spokesman told The Telegraph: ‘You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.’

The Daily Mail has approached OpenAI for comment.

It comes after clinical psychologist Paul Losoff told DailyMail.com that dependency on AI chatbots is becoming a huge risk, and warned against getting too close to ChatGPT.

‘One might come to depend and rely on AI so [much] that they don’t seek out human interactions,’ he said.

He explained that this could be especially detrimental for those who may already be struggling with anxiety or depression.

Dr. Losoff explained that relying on AI could worsen these people’s conditions, bringing cognitive symptoms such as chronic pessimism and distorted or cloudy thinking – which in itself could create further issues.

‘Because of these cognitive symptoms, there is a risk that an individual turning to AI may misinterpret AI feedback leading to harm,’ he said.

And for people who may be in crisis, these risks may be even greater.

Dr. Losoff said that there is always a risk that AI will make mistakes and provide harmful feedback during crucial mental health moments.

‘There also is a profound risk for those with acute thought disorders such as schizophrenia in which they would be prone to misinterpreting AI feedback,’ he said.
