What if you could write a document once and your AI could access it forever? Not summarized, not compressed, not approximated. The actual document, word for word, every time you ask.
Not a fading memory. Not a lossy summary stored in a 1,500-word scratchpad. The real thing, unchanged, available on demand in any conversation, on any day, for as long as you want it there.
That's not a hypothetical. It's what the Model Context Protocol makes possible today.
The persistence problem
Every AI assistant has some version of the same limitation: knowledge doesn't stick.
ChatGPT's memory holds roughly 1,500 words across your entire account. Claude compacts earlier conversation turns into summaries when the context grows too long. Both systems reset completely when you start a new chat. The 45 minutes you spent explaining your project architecture on Monday? Gone by Tuesday. You start over.
The industry response has been to make context windows bigger. GPT-4o has 128K tokens. GPT-5 has 400K. Gemini 2.5 Pro has 1M. The assumption is that more room means better retention. But a context window is RAM, not a hard drive. It processes information for a single session, then clears. A million-token window that resets to zero is still zero when you open a new conversation.
The real fix isn't a bigger window. It's moving the knowledge outside the window entirely.
How persistent documents work
The concept is straightforward: you write a document. That document lives outside the AI. When the AI needs the information, it reads the document. When the information changes, the document gets updated. The AI never needs to "remember" anything because the knowledge exists independently.
The Model Context Protocol (MCP) is what makes this practical. MCP is an open standard, launched by Anthropic in November 2024 and now adopted by OpenAI, Google, and Microsoft. It gives AI assistants a standardized way to connect to external tools and data sources. Think of it as a USB port for AI: plug in a service, and the AI can read from it and write to it.
Here's how it works with Unmarkdown:
- You write a document in Unmarkdown.
- You connect Unmarkdown to Claude via MCP.
- Claude can now read, update, and publish that document in any conversation, at any time.
No copy-pasting. No re-uploading. No re-explaining. The document is always there, always current, always complete.
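The pattern is the inverse of in-context memory: the assistant retains nothing between sessions and instead fetches each document, in full, every time it's needed. Here's a minimal sketch of that pattern in Python. The store and function names are illustrative, not Unmarkdown's actual API:

```python
# Sketch of the read-on-demand pattern: knowledge lives in an
# external store, and the assistant reads it fresh on every request.
# All names here are illustrative, not a real API.

store = {}  # stands in for an external document store

def write_document(name: str, content: str) -> None:
    """Create or update a document outside any AI session."""
    store[name] = content

def build_context(question: str, doc_names: list[str]) -> str:
    """Assemble a prompt by reading each document in full, on demand."""
    docs = "\n\n".join(f"## {n}\n{store[n]}" for n in doc_names)
    return f"{docs}\n\nQuestion: {question}"

write_document("roadmap", "Q2 priority: ship the billing revamp.")
ctx = build_context("What is the Q2 priority?", ["roadmap"])

# Updating the document changes every future read. No session
# "memory" is involved, so nothing fades or gets compacted.
write_document("roadmap", "Q2 priority: launch the mobile app.")
ctx2 = build_context("What is the Q2 priority?", ["roadmap"])
```

Because the document is read at question time rather than stored in the conversation, the second read reflects the update immediately, and a brand-new conversation gets exactly the same context as the old one.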
What makes this different from uploading files
Claude Projects and ChatGPT Projects both let you upload files for context. That sounds similar, but the mechanics are fundamentally different.
Projects summarize large files to fit context. MCP reads documents on demand, in full. When you upload a 20-page product spec to a Claude Project, the system processes it to fit within the context window. That means summarizing, truncating, or selectively pulling sections. With MCP, Claude reads the full document exactly as you wrote it. Nothing is compressed or approximated.
Projects are siloed per project. MCP documents work across all conversations. Your product roadmap in the "Strategy" project isn't accessible from the "Engineering" project. With MCP, every conversation has access to every document. Ask about your roadmap during an engineering discussion, and the AI reads it directly.
Project files are static uploads. MCP documents can be updated by the AI itself. When you upload a file to a project, it's a snapshot. If the information changes, you have to re-upload. With MCP, Claude can update the document during the conversation. After a meeting, you can say "add today's decisions to the meeting notes," and the document is updated for every future conversation.
Project context degrades over long conversations. MCP documents are fresh on every read. As a conversation grows, earlier context (including uploaded file content) gets compacted or pushed out. MCP documents aren't affected by conversation length. Claude reads the document each time you reference it, so the information is always complete, regardless of how long the conversation has been going.
What to put in your persistent documents
The highest-value documents are the ones you re-explain most often. If you find yourself typing the same background into every new chat, that's a signal the information should be in a document.
Company context. Your company name, what it does, your role, your team structure, key metrics. This is the background you provide at the start of every serious conversation. Write it once, and every future conversation starts with full context.
Project status. Current priorities, active workstreams, blockers, recent decisions. This is the information that changes most frequently and costs the most to re-explain. A project status document that gets updated weekly means your AI always knows what's in progress.
Style guides. Your writing tone, formatting preferences, terminology standards. If you always have to tell the AI "don't use em dashes" or "we call it a workspace, not a dashboard," put it in a document. The AI will read your style guide and follow it without being reminded.
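As a concrete example, a style guide document can be short and still save you from repeating yourself in every chat. The specifics below are made up for illustration:

```markdown
# Writing style guide

- Tone: direct and confident. No filler phrases.
- Don't use em dashes. Use commas or separate sentences.
- Terminology: "workspace", never "dashboard".
- Headings in sentence case.
```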
Meeting notes. After each meeting, tell Claude to update the meeting notes document with the decisions, action items, and open questions. The next time you or anyone on your team asks "what did we decide about the pricing change?", the AI reads the document and gives the answer.
Personal preferences. Communication style, workflow habits, tools you use, things you care about. The details that make an AI assistant feel like it actually knows you, rather than giving generic responses.
The feedback loop
Here's where persistent documents become more than just storage: they create a compounding feedback loop.
Claude reads your product roadmap and helps you prioritize Q2. You tell Claude to update the roadmap with the decisions. Next week, Claude reads the updated roadmap and helps you plan sprint work based on the priorities you already set. The context builds on itself.
Meeting notes accumulate. Project status evolves. Your style guide gets refined as you discover new preferences. Every update makes every future conversation more useful because the AI has access to a richer, more current knowledge base.
This is what "context engineering" actually looks like in practice. Not a one-time setup, but an ongoing process where your knowledge base grows with your work. The AI doesn't need a better memory. It needs access to yours, and the ability to help you keep it current.
Beyond persistence: real documents, not just AI context
Documents in Unmarkdown aren't just context for AI conversations. They're real documents with real utility.
You can publish any document as a clean, styled web page. Share a project update with stakeholders via a URL. Publish your style guide for your team. Create a living document that's both your AI's knowledge base and a resource for the people you work with.
You can style documents with any of 62 templates, from minimalist to corporate to academic. When a document needs to leave the AI context and enter the real world, it's already formatted and ready.
You can copy documents to Google Docs, Word, Slack, OneNote, or Email. The same document that Claude reads as markdown can be copied with full formatting into any destination your team uses.
This is the difference between a knowledge base and a notes app. Your persistent documents aren't locked in an AI tool. They're versatile documents that happen to also be the best way to give your AI permanent context.
The seven tools
When you connect Unmarkdown to Claude via MCP, Claude gets seven tools for working with your documents:
- create_document: Add new documents to your knowledge base, optionally placing them directly in a folder
- list_documents: See everything in your library, or filter by folder
- get_document: Read the full content of any document
- update_document: Keep documents current with new information, or move them between folders
- publish_document: Share a document as a public or link-only web page
- convert_markdown: Format content for Google Docs, Word, Slack, or any other destination
- get_usage: Check your API consumption for the billing period
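Under the hood, each of these is a standard MCP tool invocation: a JSON-RPC 2.0 request to the tools/call method, naming the tool and passing its arguments. A hypothetical call to get_document might look like this (the argument name is illustrative; the real schema comes from the server's tool listing, and Claude handles all of this for you):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_document",
    "arguments": { "title": "Product roadmap" }
  }
}
```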
You can reference folders by name. Say "create a meeting notes doc in my Team folder" or "move the Q4 report to Archive" and Claude handles the organization.

You don't need to memorize the tools, either. Claude discovers them automatically when the MCP connection is active. Just describe what you want in natural language, and Claude uses the right tool.
Setting it up
Connecting takes about two minutes. There are three methods depending on which Claude client you use:
Claude on the web (claude.ai): Go to Settings, then Integrations, and add Unmarkdown. This uses OAuth, so you authenticate once and the connection persists.
Claude Desktop: Add a configuration entry to your Claude Desktop config file. This requires an API key from your Unmarkdown account.
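The entry follows Claude Desktop's standard mcpServers format. The server name, URL, and environment variable below are placeholders, not Unmarkdown's actual values; use the details from the Claude integration guide:

```json
{
  "mcpServers": {
    "unmarkdown": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://example.com/mcp"],
      "env": { "UNMARKDOWN_API_KEY": "your-api-key" }
    }
  }
}
```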
Claude Code (terminal): Run the claude mcp add command with the Unmarkdown server URL. One command and you're connected.
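The command follows Claude Code's standard form for adding a remote server. The server name and URL below are placeholders; substitute the values from the Claude integration guide:

```shell
# Placeholder name and URL -- use the actual values from the guide.
claude mcp add --transport http unmarkdown https://example.com/mcp
```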
The Claude integration guide has the exact configuration for each method, including config file paths and JSON format.
What this means
The AI industry has been chasing bigger context windows as the solution to the persistence problem. But context windows are session-scoped by design. They will always reset. The correct architecture for persistent knowledge is to store it outside the session, in documents that any conversation can access.
The people who build this habit now, writing their key context into documents and connecting those documents to their AI tools, will have a compounding advantage. Every document they add makes every future conversation smarter. Every update is immediately available everywhere. The knowledge accumulates instead of evaporating.
Your AI doesn't need a better memory. It needs access to documents that never forget.
