What’s your method for persistent memory in ChatGPT? (Prompt systems compared)

07 Aug 2025

🧠 This Week’s Top Reddit Prompts & Discussions

Each week, we dive into Reddit’s most thoughtful discussions on AI and prompt engineering. Whether it's real-world techniques, ethical questions, or prompt inspiration, here’s what the AI community is talking about.

1. What’s your method for persistent memory in ChatGPT? (Prompt systems compared)

👤 u/Upstairs_Deer457  |  1 upvote  |  2025-08-07  |  🏷️ General Discussion

🔗 View Original Post on Reddit

💡 Summary

I’ve been experimenting with ways to keep long-term or cross-session memory in ChatGPT and other LLMs, using only prompt engineering. There are two main approaches I’ve seen and used:

1. Command Prompt Method:

Super simple, and works for most people who just want to save a fact or two (a rough code sketch of these semantics follows the command list):

/P-Mem_ADD [TEXT], [TAG]: Adds [TEXT] to persistent memory, labeled [TAG].
/Lt-Chat-Mem_ADD [TEXT], [TAG]: Adds [TEXT] to session memory, labeled [TAG].
/P-Mem_FORGET [TAG]: Overwrites persistent memory for [TAG].
/Lt-Chat-Mem_FORGET [TAG]: Removes [TAG] from session memory.
/P-Mem_LOAD [TAG]: Loads [TAG] into chat as a JSON object.
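These commands are prompt-level conventions, so the model only simulates the storage between turns. For anyone who wants the same add/forget/load semantics backed by a real file as a safety net, a minimal Python sketch might look like the following (the file name and function names are illustrative, not part of the original commands):

```python
import json
from pathlib import Path

# Hypothetical backing file; the original commands live entirely in the prompt,
# so this script only mirrors their add/forget/load behavior externally.
STORE = Path("p_mem.json")

def _read() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def _write(mem: dict) -> None:
    STORE.write_text(json.dumps(mem, indent=2))

def p_mem_add(text: str, tag: str) -> None:
    """Mirror of /P-Mem_ADD: store TEXT under TAG in persistent memory."""
    mem = _read()
    mem[tag] = text
    _write(mem)

def p_mem_forget(tag: str) -> None:
    """Mirror of /P-Mem_FORGET: drop the entry stored under TAG."""
    mem = _read()
    mem.pop(tag, None)
    _write(mem)

def p_mem_load(tag: str) -> str:
    """Mirror of /P-Mem_LOAD: return TAG as a small JSON object to paste into chat."""
    return json.dumps({tag: _read().get(tag)})

# Example:
# p_mem_add("Prefers concise answers", "style")
# print(p_mem_load("style"))  # -> {"style": "Prefers concise answers"}
```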

2. Framework Method (White Save Suite):

I ended up building something more structured for myself, since I wanted multi-slot context, summaries, and backup. Here’s a comparison:

                  White Save Suite               Command Memory Manager
Power             ⭐⭐⭐⭐⭐ (Framework + slots)      ⭐⭐ (Quick facts)
Ease of Use       ⭐⭐⭐ (Setup needed)             ⭐⭐⭐⭐ (Instant-on)
Features          Slots, backups, audit, meta    Add/remove/load only
Scalability       High                           Gets messy, fast
Data Integrity    Robust (summaries/backups)     Manual, error-prone
Customization     Extreme                        Minimal

If anyone wants the full framework prompt or wants to compare setups, let me know in the comments and I’ll share. Really curious what the rest of this sub uses. I'm always down to swap ideas.
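The full framework prompt isn't reproduced in the post, but as a purely hypothetical sketch of what one memory "slot" with a summary and version backups could look like if mirrored in code (none of these names come from the White Save Suite itself):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemorySlot:
    """One named slot: full text, a short summary for quick loads, and prior versions as backups."""
    tag: str
    text: str = ""
    summary: str = ""
    backups: list[str] = field(default_factory=list)
    updated: str = ""

    def save(self, new_text: str, new_summary: str) -> None:
        if self.text:
            self.backups.append(self.text)          # keep the old version for audit/restore
        self.text, self.summary = new_text, new_summary
        self.updated = datetime.now(timezone.utc).isoformat()

    def restore(self) -> None:
        if self.backups:
            self.text = self.backups.pop()          # roll back to the previous version

# Usage sketch:
# slot = MemorySlot("project_notes")
# slot.save("Full project context...", "One-line summary for quick loads")
```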

📝 Key Insight: Prompt-only memory systems trade simplicity for structure — quick command prompts cover a fact or two, while a slot-based framework scales better but needs setup.

2. How are you managing evolving and redundant context in dynamic LLM-based systems?

👤 u/steamed_specs  |  1 upvote  |  2025-08-07  |  🏷️ Quick Question

🔗 View Original Post on Reddit

💡 Summary

I’m working on a system that extracts context from dynamic sources like news headlines, emails, and other textual inputs using LLMs. The goal is to maintain a contextual memory that evolves over time — but that’s proving more complex than expected.

Some of the challenges I’m facing:

  • Redundancy: Over time, similar or duplicate context gets extracted, which bloats the system.
  • Obsolescence: Some context becomes outdated (e.g., “X is the CEO” changes when leadership changes).
  • Conflict resolution: New context can contradict or update older context — how to reconcile this automatically?
  • Storage & retrieval: How to store context in a way that supports efficient lookups, updates, and versioning?
  • Granularity: At what level should context be chunked — full sentences, facts, entities, etc.?
  • Temporal context: Some facts only apply during certain time windows — how do you handle time-aware context updates?

Currently, I’m using LLMs (like GPT-4) to extract and summarize context chunks, and I’m considering using vector databases or knowledge graphs to manage it. But I haven’t landed on a robust architecture yet.
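As one possible starting point rather than the poster's actual architecture, the sketch below handles only the simplest cases: exact-duplicate redundancy, supersession of conflicting values for the same key, and time-windowed lookups. Near-duplicate detection (e.g., embeddings) and LLM-based fact extraction would sit on top of something like this; all names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Fact:
    """One context chunk: a subject/attribute key, a value, and a validity window."""
    key: str                           # e.g. "acme_corp.ceo"
    value: str                         # e.g. "Jane Doe"
    valid_from: datetime
    valid_to: datetime | None = None   # None = still current

class ContextStore:
    """Sketch of a fact store that dedupes exact repeats and retires superseded facts."""

    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def add(self, key: str, value: str, observed: datetime) -> None:
        current = self._current(key)
        if current and current.value == value:
            return                       # redundancy: identical fact already held
        if current:
            current.valid_to = observed  # conflict/obsolescence: close out the old version
        self.facts.append(Fact(key, value, observed))

    def _current(self, key: str) -> Fact | None:
        for f in self.facts:
            if f.key == key and f.valid_to is None:
                return f
        return None

    def as_of(self, key: str, when: datetime) -> str | None:
        """Temporal lookup: which value was valid at a given point in time?"""
        for f in self.facts:
            if f.key == key and f.valid_from <= when and (f.valid_to is None or when < f.valid_to):
                return f.value
        return None

# Usage:
# store = ContextStore()
# store.add("acme_corp.ceo", "Jane Doe", datetime(2024, 1, 1, tzinfo=timezone.utc))
# store.add("acme_corp.ceo", "John Roe", datetime(2025, 6, 1, tzinfo=timezone.utc))
# store.as_of("acme_corp.ceo", datetime(2024, 6, 1, tzinfo=timezone.utc))  # -> "Jane Doe"
```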

Curious if anyone here has built something similar. How are you managing:

  • Updating historical context without manual intervention?
  • Merging or pruning redundant or stale information?
  • Scaling this over time and across sources?

Would love to hear how others are thinking about or solving this problem.

📝 Key Insight: Keeping an LLM's contextual memory current is less about extraction than lifecycle — deduplicating, reconciling, and expiring facts as sources evolve over time.

📌 Final Takeaway

From practical prompts to philosophical debates, Reddit continues to be a space where the future of AI is shaped through community conversations. Stay tuned each week for more insights curated from the minds of builders, thinkers, and everyday users in the world of AI.