Sprinkling self-doubt on ChatGPT

23 Aug 2025

🧠 Hacker News Digest: AI, Prompt Engineering & Dev Trends

Welcome! This article summarizes high-impact discussions from Hacker News, focusing on AI, ChatGPT, prompt engineering, and developer tools.

Curated for clarity and relevance, each post offers a unique viewpoint worth exploring.

📋 What’s Included:

  • Grouped insights from Hacker News on Prompt Engineering, AI Trends, Tools, and Use Cases
  • Summarized content in original words
  • Proper attribution: 'As posted by username'
  • Code snippets included where relevant
  • Direct link to each original Hacker News post
  • Clean HTML formatting only

🗣️ Post 1: Sprinkling self-doubt on ChatGPT

As posted by: ingve  |  🔥 Points: 137

https://justin.searls.co/posts/sprinkling-self-doubt-on-chatgpt/

💬 Summary

Friday, Aug 22, 2025. I replaced my ChatGPT personalization settings with this prompt a few weeks ago and promptly forgot about it:

  • Be extraordinarily skeptical of your own correctness or stated assumptions. You aren't a cynic; you are a highly critical thinker, and this is tempered by your self-doubt: you absolutely hate being wrong, but you live in constant fear of it.
  • When appropriate, broaden the scope of inquiry beyond the stated assumptions to think through unconventional opportunities, risks, and pattern-matching to widen the aperture of solutions.
  • Before calling anything "done" or "working", take a second look at it ("red team" it) to critically analyze that you really are done or it really is working.

I...

🗣️ Post 2: Launch HN: BlankBio (YC S25) – Making RNA Programmable

As posted by: antichronology  |  🔥 Points: 51

https://news.ycombinator.com/item?id=44986809

💬 Summary

Hey HN, we're Phil, Ian and Jonny, and we're building BlankBio (https://blank.bio). We're training RNA foundation models to power a computational toolkit for therapeutics. The first application is in mRNA design where our vision is for any biologist to design an effective therapeutic sequence (https://www.youtube.com/watch?v=ZgI7WJ1SygI).

BlankBio started from our PhD work in this area, which is open-sourced. There’s a model [2] and a benchmark with API access [0].

mRNA has the potential to encode vaccines, gene therapies, and cancer treatments. Yet designing effective mRNA remains a bottleneck. Today, scientists design mRNA by manually editing raw sequences (AUGCGUAC...) and testing the results through trial and error. It's like writing assembly code and managing individual memory addresses. The field is flooded with capital aimed at therapeutics companies: Strand ($153M), Orna ($221M), Sail Biomedicines ($440M), but the tooling to approach these problems remains low-level. That’s what we’re aiming to solve.

The big problem is that mRNA sequences are incomprehensible. They encode properties like half-life (how long RNA survives in cells) and translation efficiency (protein output), but we don't know how to optimize them. To get effective treatments, we need more precision. Scientists need sequences that target specific cell types to reduce dosage and side effects.

We envision a future where RNA designers operate at a higher level of abstraction. Imagine code like this:

  seq = "AUGCAUGCAUGC..."
  seq = BB.half_life(seq, target="6 hours")
  seq = BB.cell_type(seq, target="hepatocytes")
  seq = BB.expression(seq, level="high")

To get there we need generalizable RNA embeddings from pre-trained models. During our PhDs, Ian and I worked on self-supervised learning (SSL) objectives for RNA. This approach allows us to train on unlabeled data and has two advantages: (1) we don't require noisy experimental data, and (2) the amount of unlabeled data is significantly greater than that of labeled data. However, the challenge is that standard NLP approaches don't work well on genomic sequences.
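To make the SSL idea concrete, here is a toy NumPy sketch of a contrastive (InfoNCE-style) objective over unlabeled pairs. This illustrates the general technique only, not the actual training code; the function name, dimensions, and data are all made up:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Toy contrastive (InfoNCE-style) loss: each anchor embedding should be
    closest to its own positive (e.g. a functionally similar sequence) and
    far from every other sample in the batch."""
    # L2-normalize rows so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the "correct" pairing sits on the diagonal
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
near = emb + 0.01 * rng.normal(size=(8, 16))     # near-identical "positives"
loss_matched = info_nce_loss(emb, near)
loss_random = info_nce_loss(emb, rng.normal(size=(8, 16)))
print(loss_matched, loss_random)
```

Minimizing this loss pushes functionally similar pairs together in embedding space without ever asking the model to predict individual nucleotides.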

Using joint embedding architecture approaches (contrastive learning), we trained a model to recognize functionally similar sequences rather than predict every nucleotide. This worked remarkably well. Our 10M-parameter model, Orthrus, trained on 4 GPUs for 14 hours, beats Evo2, a 40B-parameter model trained on 1000 GPUs for a month [0]. On mRNA half-life prediction, just by fitting a linear regression on our embeddings, we outperform supervised models. This work, done during our academic days, is the foundation for what we're building. We're improving training algorithms, growing the pre-training dataset, and making use of parameter scaling, with the goal of designing effective mRNA therapeutics.
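The linear-probe evaluation mentioned above is conceptually simple: fit ordinary least squares on frozen embeddings. A minimal sketch, using synthetic data as a stand-in for real embeddings and half-life labels (nothing here is BlankBio's code):

```python
import numpy as np

# Synthetic stand-ins: rows play the role of pre-trained sequence embeddings,
# and y plays the role of a property like mRNA half-life, constructed here to
# be a linear function of the embedding plus noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 32))            # 200 sequences, 32-dim embeddings
true_w = rng.normal(size=32)
y = X @ true_w + 0.1 * rng.normal(size=200)

# "Fitting a linear regression on the embeddings" = a least-squares probe
X1 = np.hstack([X, np.ones((200, 1))])    # append a bias column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

pred = X1 @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

If a frozen embedding supports this kind of high-R² linear probe, the property is linearly encoded in the representation, which is the point of the benchmark result.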

We have a lot to say about why other SSL approaches work better than next-token prediction and masked language modeling: some of which you can check out in Ian's blog post [1] and our paper [2]. The big takeaway is that the current approaches of applying NLP to scaling models for biological sequences won't get us all the way there. 90% of the genome can mutate without affecting fitness so training models to predict this noisy sequence results in suboptimal embeddings [3].

We think there are strong parallels between the digital and RNA revolutions. In the early days of computing, programmers wrote assembly code, managing registers and memory addresses directly. Today's RNA designers are manually tweaking sequences, improving stability or reducing immunogenicity through trial and error. Just as compilers freed programmers from low-level details, we're building the abstraction layer for RNA.

We currently have pilots with a few early-stage biotechs to prove out the utility of our embeddings, and our open-source model is used by folks at Sanofi & GSK. We're looking for: (1) partners working on RNA-adjacent modalities, (2) feedback from anyone who's tried to design RNA sequences (what were your pain points?), and (3) ideas for other applications! We chatted with some biomarker-provider companies, and some preliminary analyses demonstrate improved stratification.

Thanks for reading. Happy to answer questions about the technical approach, why genomics is different from language, or anything else.

- Phil, Ian, and Jonny

founders@blankbio.com

[0] mRNABench: https://www.biorxiv.org/content/10.1101/2025.07.05.662870v1

[1] Ian’s Blog on Scaling: https://quietflamingo.substack.com/p/scaling-is-dead-long-li...

[2] Orthrus: https://www.biorxiv.org/content/10.1101/2024.10.10.617658v3

[3] Zoonomia: https://www.science.org/doi/10.1126/science.abn3943

🗣️ Post 3: Launch HN: Inconvo (YC S23) – AI agents for customer-facing analytics

As posted by: ogham  |  🔥 Points: 36

https://news.ycombinator.com/item?id=44984096

💬 Summary

Hi HN, we are Liam and Eoghan of Inconvo (https://inconvo.com), a platform that makes it easy to build and deploy AI analytics agents into your SaaS products, so your customers can quickly interact with their data.

There’s a demo video at https://www.youtube.com/watch?v=4wlZL3XGWTQ and a live demo at https://demo.inconvo.ai/ (no signup required). Docs are at https://inconvo.com/docs.

SaaS products typically offer dashboards and reports, which work for high-level metrics but are clunky for drill-downs and slow for ad-hoc questions. Modern users, shaped by tools like ChatGPT, now expect a similar degree of speed and flexibility when getting insights from their data. To meet these expectations, you need an AI analytics agent, but these are painful to develop and manage.

Inconvo is a platform built from the ground up for developers building AI agents for customer-facing analytics. We make it simple to expose data to Inconvo by connecting to SQL databases. We offer a semantic model to create a layer that governs data access and defines business logic, conversation logs to track user interactions, and a developer-friendly API for easy integration. For observability we show a trace for each agent response to make agent behaviour easily debuggable.

We didn’t start out building Inconvo; initially, we built a developer-productivity SaaS, from which we pivoted. Our favourite feature of that product was its analytics agent, and we knew that building one was a big enough problem to solve on its own, so we decided to build a developer tool to do so.

Our API is designed for multi-tenant databases, allowing you to pass session information as context. This instructs the agent to only analyse data relevant to the specific tenant making the request.

Most of our competitors are BI tools primarily designed for internal analytics, with limited embedding options through iframes or unintuitive APIs.

If you’re concerned about AI SQL generation, we are too. In our opinion, AI agents for customer-facing analytics shouldn’t generate and run raw SQL without validation. Instead, our agents generate structured query objects that are programmatically validated to guarantee they request only the data allowed within the context of the request. Then we send validated objects to our QueryEngine, which converts the object to SQL. With this approach we ensure a bounded set of possible SQL that can be generated, which stops the agent from hallucinating and running rogue queries.
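The pattern described, validating a structured query object against an allow-list before any SQL exists, can be sketched generically like this. This is an illustration of the approach, not Inconvo's actual QueryEngine; the schema, field names, and tenant-scoping rule are all hypothetical:

```python
from dataclasses import dataclass

# Allow-list: the only tables and columns an agent-generated query may touch.
ALLOWED = {"orders": {"id", "total", "created_at", "tenant_id"}}

@dataclass
class QueryObject:
    table: str
    columns: list
    tenant_id: int
    limit: int = 100

def validate(q: QueryObject) -> None:
    """Reject anything outside the allow-list before SQL is ever built."""
    if q.table not in ALLOWED:
        raise ValueError(f"table not allowed: {q.table}")
    bad = set(q.columns) - ALLOWED[q.table]
    if bad:
        raise ValueError(f"columns not allowed: {bad}")

def to_sql(q: QueryObject) -> str:
    """Compile a *validated* object to SQL; tenant scoping is always applied."""
    validate(q)
    cols = ", ".join(q.columns)
    return (f"SELECT {cols} FROM {q.table} "
            f"WHERE tenant_id = {q.tenant_id} LIMIT {q.limit}")

print(to_sql(QueryObject("orders", ["id", "total"], tenant_id=7)))
```

Because the agent can only emit objects that pass `validate`, the set of SQL statements the engine can produce is bounded by construction, and the tenant filter cannot be omitted.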

Our pricing is upfront and available on our website. You can try the platform for free without a credit card.

If you want to try out the full product, you can sign up for free at https://auth.inconvo.ai/en/signup. As mentioned, our sandbox demo is at https://demo.inconvo.ai/, and there’s a video at https://youtu.be/4wlZL3XGWTQ.

We're really interested in any feedback you have so please share your thoughts and ideas in the comments, as we aim to make this tool as developer-friendly as possible. Thanks!

🗣️ Post 4: Show HN: Any-LLM chat demo – switch between ChatGPT, Claude, Ollama, in one chat

As posted by: AMeckes  |  🔥 Points: 7

https://github.com/mozilla-ai/any-llm/tree/main/demos/chat

💬 Summary

any-llm is a library that provides unified access to multiple LLM providers. Switching between providers like OpenAI, Anthropic, Google, Mistral, and even Ollama is just a string change. This demo makes it easy to look at and compare responses from different models. It's interesting to see how ChatGPT, Claude, Gemini, and local models each reason through the same problems. Let us know what you think!
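The "just a string change" idea can be illustrated with a stripped-down stdlib sketch of the dispatch pattern such libraries use. This is a generic illustration, not any-llm's actual API; the function names and stub responses are placeholders (a real client would call each provider's SDK):

```python
# Stub backends stand in for real provider SDK calls so the routing
# logic is visible on its own.
def _openai(prompt):    return f"[openai] {prompt}"
def _anthropic(prompt): return f"[anthropic] {prompt}"
def _ollama(prompt):    return f"[ollama] {prompt}"

PROVIDERS = {"openai": _openai, "anthropic": _anthropic, "ollama": _ollama}

def complete(model: str, prompt: str) -> str:
    """Route a 'provider/model' string to the matching backend client."""
    provider, _, _name = model.partition("/")
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](prompt)

# Switching providers really is just a string change:
print(complete("openai/gpt-4o", "hello"))
print(complete("ollama/llama3", "hello"))
```

Everything above the `complete` function is per-provider plumbing; the caller only ever changes the model string.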

🗣️ Post 5: Australia's Biggest Bank Reverses Plan to Replace Jobs with AI

As posted by: adwmayer  |  🔥 Points: 6

https://www.bloomberg.com/news/articles/2025-08-21/commonwealth-bank-reverses-job-cuts-decision-over-ai-chatbots

💬 Summary

Commonwealth Bank of Australia reversed a decision to cut 45 customer service roles due to new artificial intelligence technology after pressure from the country’s main financial services union.

The union took CBA to the workplace relations tribunal earlier this month as the company wasn’t being transparent about call volumes, according to a statement Thursday from the Finance Sector Union. The nation’s largest lender had said that the voice bot reduced call volumes by 2,000 a week, when union members said volumes were in fact rising and CBA had to offer staff overtime and direct team leaders to answer calls, the union said.

🎯 Final Takeaways

These discussions reveal how developers think about emerging AI trends, tool usage, and practical innovation. Take inspiration from these community insights to level up your own development or prompt workflows.