Australian bank gives out customer phone to another customer by asking ChatGPT

12 Aug 2025

🧠 Hacker News Digest: AI, Prompt Engineering & Dev Trends

Welcome! This article summarizes high-impact discussions from Hacker News, focusing on AI, ChatGPT, prompt engineering, and developer tools.

Curated for clarity and relevance, each post offers a unique viewpoint worth exploring.

📋 What’s Included:

  • Grouped insights from Hacker News on Prompt Engineering, AI Trends, Tools, and Use Cases
  • Summarized content in original words
  • Proper attribution: 'As posted by username'
  • Code snippets included where relevant
  • Direct link to each original Hacker News post
  • Clean HTML formatting only

🗣️ Post 1: Australian bank gives out customer phone to another customer by asking ChatGPT

As posted by: parisidau  |  🔥 Points: 8

🔗 https://hey.paris/posts/cba/

💬 Summary

CBA has been naughty, and too reliant on their AI boyfriend. Here’s what happened:

  • A CBA customer contacted the bank requesting contact details for Secretlab (a company that is not us, and makes chairs).
  • A CBA staff member queried ChatGPT (possibly via their own personal, unauthenticated access to ChatGPT) to obtain a phone number for Secretlab (chairs).
  • CBA staff disclosed the retrieved phone number to the requesting customer.
  • The retrieved phone number belongs to one of the directors of Secret Lab (a company that does not make chairs, and is us), a customer of CBA, and is used for our CBA account and our Director’s CBA account.
  • Therefore, CBA disclosed customer personal information to another, unrelated customer, and trusted a...

🗣️ Post 2: Ask HN: Why don't LLMs replace bosses instead of engineers?

As posted by: fzeindl  |  🔥 Points: 4

🔗 https://news.ycombinator.com/item?id=44873860

💬 Summary

I asked myself why all the talk is about augmenting or replacing engineers instead of the bosses, and let ChatGPT formulate my thoughts:

1. Engineers vs. LLMs: low tolerance for mistakes

Engineering reality: If a developer pushes code that’s subtly wrong, you can crash a service, corrupt data, or introduce security flaws.

LLMs today: Great at producing plausible-looking code, but still prone to logical gaps or hidden bugs that might not be obvious until production.

Result: You’d need heavy human oversight anyway — turning the “replacement” into more of a “babysitting” scenario, which could be more costly than just having good engineers write it themselves.

2. CEOs vs. LLMs: higher tolerance for ambiguity

CEO reality: Decisions are often based on incomplete data, lots of gut feeling, and persuasive narrative. There’s more wiggle room — a “wrong” call can sometimes be spun as “strategic” or “visionary” until results catch up.

LLMs today: Excellent at synthesizing multiple data sources, spotting patterns, and generating strategic options — all without bias toward personal ego or politics (well… except whatever biases the training data has).

Result: They could produce coherent, well-justified strategies quickly, and humans could still be the ones to communicate and enact them.

3. Why this actually makes sense

If you think of error cost:

Engineer error = immediate, measurable, costly (bug in production).

CEO error = slower to surface, more subjective, sometimes recoverable with spin.

If you think of data integration skills:

LLMs have superhuman recall and synthesis capabilities.

CEOs need exactly that skill for market intelligence, competitor analysis, and high-level decision frameworks.

So yes — in this framing, replacing CEO-level strategy generation with an LLM and keeping engineers human might actually be more practical right now. Humans would still need to do the “face work” (investor relations, internal morale), but the strategic brain could be an LLM fed with all relevant business data.
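
To make the last point concrete, here is a minimal sketch of "an LLM fed with relevant business data" generating strategy options for humans to vet. The metrics, prompt wording, and model name are illustrative assumptions, not anything from the original post:

# Sketch only: feed an LLM a summarized business snapshot and ask for strategy
# options; humans still review, own, and communicate whatever comes back.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical metrics an executive team might already track.
business_data = {
    "quarterly_revenue_growth": "-3%",
    "churn_rate": "6.5%",
    "top_competitor_move": "launched a cheaper self-serve tier",
    "cash_runway_months": 14,
}

prompt = (
    "You are acting as a strategy advisor. Given these metrics:\n"
    + "\n".join(f"- {k}: {v}" for k, v in business_data.items())
    + "\nPropose three strategic options, each with expected risks, "
    "required resources, and a measurable 6-month success criterion."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap for whatever is available
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)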

🗣️ Post 3: You Sound Like ChatGPT

As posted by: pseudolus  |  🔥 Points: 3

🔗 https://www.theverge.com/openai/686748/chatgpt-linguistic-impact-common-word-usage

💬 Summary

Join any Zoom call, walk into any lecture hall, or watch any YouTube video, and listen carefully. Past the content and inside the linguistic patterns, you’ll find the creeping uniformity of AI voice. Words like “prowess” and “tapestry,” which are favored by ChatGPT, are creeping into our vocabulary, while words like “bolster,” “unearth,” and “nuance,” words less favored by ChatGPT, have declined in use. Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger. In the 18 months after ChatGPT was released, speakers used words like “meticulous,” “delve,” “realm,” and “adept” up to 51 percent more frequently than in the three...
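
The underlying measurement is simple to sketch: compare how often ChatGPT-favored words appear in text from before and after the model's release. Below is a minimal illustration in Python; the two sample texts are invented, and the real studies work over large speech and transcript corpora:

# Rough sketch of the kind of frequency comparison described above.
import re
from collections import Counter

AI_FAVORED = {"delve", "meticulous", "realm", "adept", "tapestry", "prowess"}

def rate_per_10k(text: str, words: set[str]) -> float:
    """Occurrences of the target words per 10,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(count for w, count in Counter(tokens).items() if w in words)
    return 10_000 * hits / max(len(tokens), 1)

before = "We looked into the data carefully and found a detailed answer."
after = "Let us delve into this realm with a meticulous, adept reading of the data."

print("before:", rate_per_10k(before, AI_FAVORED))
print("after: ", rate_per_10k(after, AI_FAVORED))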

🗣️ Post 4: WildChat-4.8M: 4.8M Real User–ChatGPT Conversations (Open Dataset)

As posted by: yuntian  |  🔥 Points: 3

🔗 https://huggingface.co/datasets/allenai/WildChat-4.8M

💬 Summary

[ { "content": "Let A, B, and C be events with Prob[A] = 0.7 Prob[B] = 0.4 Prob[C] = 0.8. What is the smallest possible value for P[A ∩ B ∩ C]? ", "created": null, "header": { "accept-language": "en-US,en;q=0.9", "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" }, "hashed_ip": "e49e258fbabeb7e0916d44b43e30ad77e61026bd8cb07cc4086fde524b2b0b07", "country": "Canada", "toxic": false, "redacted": false, "state": "Alberta", "language": "English", "openai_id": null, "role": "user", "temperature": null, "timestamp": null, "token_counter": null, "top_p": null, "turn_identifier": 101005, "system_fingerprint": null, "usage": null }, { "content": "To find the smallest possible value for P[A ∩ B ∩ C], we need to look for the case when the events have the least overlap. We can do this by considering the...

🗣️ Post 5: With just a few messages, biased AI chatbots swayed people's political views

As posted by: gnabgib  |  🔥 Points: 3

🔗 https://www.washington.edu/news/2025/08/06/biased-ai-chatbots-swayed-peoples-political-views/

💬 Summary

If you’ve interacted with an artificial intelligence chatbot, you’ve likely realized that all AI models are biased. They were trained on enormous corpuses of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system’s biases can affect users is less clear, so a University of Washington study put it to the test. A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. For help, they were randomly assigned one of three versions of ChatGPT: a base model, one with liberal bias...

🎯 Final Takeaways

These discussions reveal how developers think about emerging AI trends, tool usage, and practical innovation. Take inspiration from these community insights to level up your own development or prompt workflows.