
A 60-year-old man has been hospitalized with bromide poisoning after following ChatGPT's suggestion to replace table salt (sodium chloride) with sodium bromide for seasoning. The incident, reported by The New York Post, highlights the risks of relying on artificial intelligence for health advice without critical evaluation.
The man, who had studied nutrition during his university years and read about the adverse health effects of table salt, decided to experiment by completely eliminating table salt from his diet and substituting it with sodium bromide, purchased online.
Sodium bromide, while visually similar to table salt, is primarily used in industrial and medical applications.
The man was admitted to the hospital after three months on the substitute, convinced that his neighbor was poisoning him.
He disclosed to doctors that he followed several dietary restrictions, such as drinking only homemade distilled water. Despite complaining of thirst, he expressed skepticism about the water provided by the hospital, suspecting it was contaminated.
The man had no prior history of mental illness, but within 24 hours of hospitalization his paranoia worsened, and he began experiencing visual and auditory hallucinations, at one point even attempting to escape. He was involuntarily admitted to the hospital's inpatient psychiatry unit, where he received appropriate treatment.
The medical team diagnosed the man with bromide poisoning. After his condition improved, he reported additional symptoms, including acne, fatigue, insomnia, ataxia (loss of muscle coordination), and severe thirst—typical signs of bromide toxicity.
Last week, the case was published in the journal Annals of Internal Medicine: Clinical Cases. The research team issued a warning, stating: "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation."
Although OpenAI specifies in its terms of use that ChatGPT is not intended to diagnose or treat any health conditions, a survey conducted this year revealed that over 35% of Americans use AI to learn about or manage their health. Alarmingly, 63% of respondents considered AI-generated health advice to be trustworthy.