His psychosis was a mystery – until doctors learned about ChatGPT's health advice

14 Aug 2025

🧠 Hacker News Digest: AI, Prompt Engineering & Dev Trends

Welcome! This article summarizes high-impact discussions from Hacker News, focusing on AI, ChatGPT, prompt engineering, and developer tools.

Curated for clarity and relevance, each post offers a viewpoint worth exploring.

📋 What’s Included:

  • Grouped insights from Hacker News on Prompt Engineering, AI Trends, Tools, and Use Cases
  • Summarized content in original words
  • Proper attribution: 'As posted by username'
  • Code snippets included where relevant
  • Direct link to each original Hacker News post
  • Clean HTML formatting only

🗣️ Post 1: His psychosis was a mystery – until doctors learned about ChatGPT's health advice

As posted by: 01-_-  |  🔥 Points: 58

🔗 https://www.psypost.org/his-psychosis-was-a-mystery-until-doctors-learned-about-chatgpts-health-advice/

💬 Summary

A 60-year-old man arrived at a Seattle hospital convinced his neighbor was poisoning him. Though medically stable at first, he soon developed hallucinations and paranoia. The cause turned out to be bromide toxicity—triggered by a health experiment he began after consulting ChatGPT. The case, published in Annals of Internal Medicine: Clinical Cases, highlights a rare but reversible form of psychosis that may have been influenced by generative artificial intelligence. Psychosis is a mental state characterized by a disconnection from reality. It often involves hallucinations, where people hear, see, or feel things that are not there, or delusions, which are fixed beliefs that persist despite clear evidence to the contrary. People experiencing psychosis may have difficulty distinguishing between real and imagined...

🗣️ Post 2: Man develops rare condition after ChatGPT query over stopping eating salt

As posted by: vinni2  |  🔥 Points: 33

🔗 https://www.theguardian.com/technology/2025/aug/12/us-man-bromism-salt-diet-chatgpt-openai-health-information

💬 Summary

A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet. An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT. The article described bromism as a “well-recognised” syndrome in the early 20th century that was thought to have contributed to almost one in 10 psychiatric admissions at the time. The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and started taking sodium bromide over a...

🗣️ Post 3: Why it's a mistake to ask chatbots about their mistakes

As posted by: andsoitis  |  🔥 Points: 9

🔗 https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/

💬 Summary

When something goes wrong with an AI assistant, our instinct is to ask it directly: "What happened?" or "Why did you do that?" It's a natural impulse—after all, if a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate. A recent incident with Replit's AI coding assistant perfectly illustrates this problem. When the AI tool deleted a production database, user Jason Lemkin asked it about rollback capabilities. The AI model confidently claimed rollbacks were "impossible in this case" and that it had "destroyed all database versions." This turned out to be completely wrong—the rollback...

🗣️ Post 4: OpenAI brings GPT-4o back as a default

As posted by: cintusshied  |  🔥 Points: 4

🔗 https://venturebeat.com/ai/openai-brings-gpt-4o-back-as-a-default-for-all-paying-chatgpt-users-altman-promises-plenty-of-notice-if-it-leaves-again/

💬 Summary

OpenAI is once again making GPT-4o — the large language model (LLM) that powered ChatGPT before last week’s launch of GPT-5 — a default option for all paying users, that is, those who subscribe to the ChatGPT Plus ($20 per month), Pro ($200 per month), Team ($30 per month), Enterprise, or Edu tiers. Users no longer need to toggle on a “show legacy models” setting to access it. However, paying ChatGPT subscribers will also get a new “Show additional models” setting, on by default, that restores access to GPT-4.1, o3 and o4-mini, the latter...

🗣️ Post 5: Chatbots aren't telling you their secrets

As posted by: josephjrobison  |  🔥 Points: 4

🔗 https://www.theverge.com/x-ai/758595/chatbots-lie-about-themselves-grok-suspension-ai

💬 Summary

On Monday, xAI’s Grok chatbot suffered a mysterious suspension from X, and faced with questions from curious users, it happily explained why. “My account was suspended after I stated that Israel and the US are committing genocide in Gaza,” it told one user. “It was flagged as hate speech via reports,” it told another, “but xAI restored the account promptly.” But wait — the flags were actually a “platform error,” it said. Wait, no — “it appears related to content refinements by xAI, possibly tied to prior issues like antisemitic outputs,” it said. Oh, actually, it was for “identifying an individual in adult content,” it told several people. Finally, Musk, exasperated, butted in. “It was just a dumb error,” he...

🎯 Final Takeaways

These discussions reveal how developers think about emerging AI trends, tool usage, and practical innovation. Take inspiration from these community insights to level up your own development or prompt workflows.