
Claude Projects vs ChatGPT Projects vs External Knowledge Bases

Updated Feb 24, 2026 · 9 min read

You want your AI to know things. Your company's product roadmap. Your team's writing style. The technical constraints that make half of its suggestions irrelevant. You want to stop explaining the same background in every conversation.

Three approaches have emerged for solving this problem: ChatGPT Projects, Claude Projects, and external knowledge bases connected via MCP. Each makes different tradeoffs around capacity, persistence, and flexibility. This guide compares them side by side so you can pick the right approach for how you actually work.

The core problem

Every AI conversation starts from zero. The context window holds information for the duration of a session, but when you close the tab or start a new chat, that context disappears. It does not matter whether the window is 128K tokens or 1 million tokens. The behavior is identical: new conversation, blank slate.

Both OpenAI and Anthropic have built features to address this. Projects let you upload reference documents. Memory systems try to retain facts across sessions. But these features have real limits, and understanding those limits is the key to choosing the right approach.

ChatGPT Projects

OpenAI launched Projects as a way to organize conversations around specific topics and attach reference files. Free users got access on September 3, 2025.

File limits. Free accounts can attach 5 files per project. Plus subscribers ($20/month) get 25 files. Pro subscribers ($200/month) get 40 files. Each file can be up to 512MB, roughly 2 million tokens per file. These are generous per-file sizes, but the number of files is the real constraint. If your project context spans more than a handful of documents, you will hit the ceiling quickly on the free tier.

Custom instructions. Each project gets its own custom instructions block, which is prepended to every conversation in that project. This is useful for setting tone, terminology, and behavioral rules specific to that domain.

Context drift. Long conversations are where ChatGPT Projects start to struggle. As a conversation extends past roughly 30 messages, earlier context begins to fade. The AI may contradict something you established at the beginning of the conversation or ignore constraints you already explained. The practical workaround is to keep conversations focused and start new ones frequently, but that means re-establishing context each time.

Siloed by design. Each project is a separate container. Your product roadmap in the "Product Strategy" project is invisible to conversations in the "Engineering" project. If you need information to cross project boundaries, you must duplicate files or switch projects entirely.

Memory. ChatGPT's Saved Memories system (separate from Projects) stores roughly 1,500 to 1,750 words of remembered facts, shared across all your conversations and projects. The "Reference Chat History" feature, introduced in April 2025, can pull loosely from past conversations. But the memory system has proven unreliable. OpenAI experienced at least two memory wipes in 2025, where users lost months of accumulated context overnight with no way to recover it.

Strengths: Easy setup, integrated directly into the ChatGPT interface, generous per-file sizes, custom instructions per project.

Weaknesses: Low file count limits on free and Plus tiers, context drift in long conversations, projects are siloed from each other, memory system is small and has proven unreliable.

Claude Projects

Anthropic's Projects feature takes a similar approach but handles large document sets differently.

File handling. Each file can be up to 30MB. For small projects, Claude loads uploaded files in full, giving the AI complete access to every word. For larger projects, Claude automatically switches to RAG mode (retrieval-augmented generation), using semantic search to pull the most relevant sections from your documents. This RAG mode provides roughly 10x the effective capacity compared to loading everything into context, making it practical to upload significantly more material.
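To make the RAG behavior concrete, here is a toy sketch of how retrieval-style selection works in general: score each document chunk against the query and keep only the best matches. This is not Anthropic's implementation; it uses bag-of-words cosine similarity as a crude stand-in for real embeddings, and the chunk texts are invented examples.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts -- a crude stand-in for real embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(chunks, query, top_k=2):
    """Return the top_k chunks most relevant to the query."""
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(vectorize(c), qv), reverse=True)[:top_k]

chunks = [
    "Q3 roadmap: ship the billing revamp and usage-based pricing.",
    "Style guide: use sentence case for headings, avoid jargon.",
    "Architecture: the API gateway fronts three internal services.",
]
print(retrieve(chunks, "what is on the pricing roadmap?", top_k=1))
```

The upside is capacity: only the retrieved chunks occupy context. The downside, as with any retrieval step, is that relevant material can be missed if it does not score well against the query.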

Project-scoped memory. Since September and October 2025, Claude Projects have had their own memory space. Each project maintains a "Memory summary" that Claude auto-synthesizes and updates roughly every 24 hours. The summary is organized into categories like "Role and Work," "Current Projects," and "Personal Content." You can view and edit the memory directly.

Compacting. When a conversation grows beyond what fits in Claude's context window, earlier portions are automatically compacted. This triggers at approximately 83.5% of context usage and achieves roughly 85% payload reduction. The trade-off is the same as with any summarization: specific details, exact numbers, and precise wording can be lost in the compression.
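As a back-of-envelope illustration using the figures above (the 200K-token window in the example is an assumption, not a spec):

```python
COMPACT_TRIGGER = 0.835   # compaction reportedly triggers near 83.5% of the window
PAYLOAD_REDUCTION = 0.85  # summarization reportedly shrinks the payload by ~85%

def should_compact(tokens_used: int, window: int) -> bool:
    """True once conversation history crosses the trigger threshold."""
    return tokens_used >= window * COMPACT_TRIGGER

def compacted_size(tokens_used: int) -> int:
    """Approximate token count after an ~85% payload reduction."""
    return round(tokens_used * (1 - PAYLOAD_REDUCTION))

# Assuming a 200K-token window: compaction starts around 167K tokens of
# history, and that history shrinks to roughly 25K tokens.
print(should_compact(167_000, 200_000), compacted_size(167_000))
```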

Strengths: Better handling of large document sets via automatic RAG, project-scoped memory (not just global), higher effective capacity for reference material, memory is user-editable.

Weaknesses: Still siloed per project, compacting loses detail in long conversations, project memory is relatively new and continues to evolve.

External knowledge bases via MCP

The third approach moves your knowledge outside any AI platform entirely.

How it works. Your documents live in an external service (like Unmarkdown™). The AI connects to that service via the Model Context Protocol (MCP) and reads your documents on demand. Instead of uploading files to a project, the AI reaches out to your document library whenever it needs information.
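The pattern can be sketched as a pair of tools in the shape MCP servers typically expose: one to discover documents, one to read a document in full on demand. This is an illustrative toy, not Unmarkdown's actual interface; a real server would register these functions through an MCP SDK, and the document names below are invented.

```python
# A toy document store with two "tools" in the shape MCP servers expose.
DOCS = {
    "product-roadmap": "Q3: billing revamp. Q4: usage-based pricing rollout.",
    "style-guide": "Sentence case for headings. No unexplained jargon.",
}

def list_documents() -> list[str]:
    """Tool 1: let the model discover what exists before reading anything."""
    return sorted(DOCS)

def read_document(name: str) -> str:
    """Tool 2: return the full document text, unsummarized."""
    if name not in DOCS:
        raise KeyError(f"unknown document: {name}")
    return DOCS[name]

# The client loads only what the current question needs:
print(list_documents())
print(read_document("product-roadmap"))
```

Because the model calls these tools per question, nothing sits in the context window until it is actually needed.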

No context window limits. Your knowledge base is not constrained by the AI's context window. The AI reads specific documents when it needs them, rather than trying to hold everything at once. A 100-document knowledge base works just as well as a 5-document one, because the AI only loads what is relevant to the current question.

No summarization. When the AI reads a document through MCP, it gets the full content exactly as you wrote it. There is no compression, no truncation, and no selective extraction. If your product roadmap is 3,000 words, the AI reads all 3,000 words.

Works across sessions. Start a new conversation tomorrow, and the AI still has full access to every document. The knowledge persists because it was never stored in the conversation to begin with. It lives in your document library, independent of any chat session.

Not siloed. The same knowledge base serves every conversation, regardless of topic. Your product roadmap, style guide, and technical architecture are all accessible from any conversation. You do not need to organize knowledge into separate projects or duplicate files across containers.

Multi-purpose documents. Documents stored in an external knowledge base are not limited to AI reference. With Unmarkdown™, the same documents can be published as web pages, formatted for Google Docs or Word, converted for Slack, or shared with your team. Your knowledge base doubles as a document publishing platform.

Weaknesses: Requires an MCP-compatible client. Claude supports MCP natively (via claude.ai, Claude Desktop, and Claude Code). OpenAI adopted MCP in March 2025, Google DeepMind in April 2025, and Microsoft in May 2025, so the ecosystem is expanding. Setup is slightly more involved than uploading a file to a project, though the integration guide walks through it in minutes.

Comparison table

| Feature | ChatGPT Projects | Claude Projects | External KB (MCP) |
| --- | --- | --- | --- |
| File/document limits | 5 (Free), 25 (Plus), 40 (Pro) | 30MB per file, RAG for large sets | Unlimited |
| Cross-session persistence | Files persist, conversations do not | Files persist, conversations do not | Full persistence, all sessions |
| Cross-project access | No, siloed per project | No, siloed per project | Yes, one knowledge base for all |
| Context drift | Yes, after ~30 messages | Yes, compacting at ~83.5% usage | No, reads fresh on demand |
| Memory system | ~1,750 words global, unreliable | Project-scoped, auto-synthesized | Not needed, documents are the memory |
| Document fidelity | Processed to fit context | RAG may select sections | Full document, no summarization |
| AI can update docs | No | No | Yes, via MCP tools |
| Multi-platform use | ChatGPT only | Claude only | Any MCP-compatible client |
| Setup complexity | Low (upload files) | Low (upload files) | Medium (connect MCP server) |
| Documents usable outside AI | No (platform-locked) | No (platform-locked) | Yes (publish, format, share) |

When to use each

ChatGPT Projects work best for casual use with a small number of reference documents. If you have 5 or fewer files and primarily use ChatGPT, the built-in Projects feature is the simplest path. Keep conversations short (under 30 messages) and start new ones frequently to avoid context drift.

Claude Projects are the better choice when you have a moderate volume of reference material and want the AI to search through it intelligently. The automatic RAG mode handles larger document sets well, and project-scoped memory is a meaningful upgrade over ChatGPT's global memory. If you work in Claude and your documents fit within a single project scope, this is a solid option.

External knowledge bases are the right approach when your context needs are serious. If you have more than a handful of documents, if you need the same knowledge accessible across different conversations and projects, if you want the AI to update your documents (not just read them), or if your documents serve purposes beyond AI reference, an external knowledge base solves problems that platform-native Projects cannot.

The three approaches are not mutually exclusive. You can use Claude Projects for quick, project-specific work while maintaining an external knowledge base for your core organizational knowledge. The external knowledge base becomes the single source of truth, and Projects become lightweight workspaces for specific tasks.

Getting started with an external knowledge base

If you want to try the external knowledge base approach:

  1. Create a few documents in Unmarkdown™ covering the context you re-explain most often: company overview, current projects, style preferences, technical constraints.
  2. Connect to Claude via MCP using the integration guide. On claude.ai, this takes about two minutes through Settings and Integrations.
  3. In your next conversation, ask Claude to read one of your documents and answer a question based on it.
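For desktop clients, step 2 boils down to registering the server in the client's MCP configuration. A Claude Desktop entry looks roughly like the following; the server name and URL here are placeholders rather than Unmarkdown's actual endpoint, and `mcp-remote` is one common way to bridge a remote HTTP server into a stdio-based client:

```json
{
  "mcpServers": {
    "unmarkdown": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://example.com/mcp"]
    }
  }
}
```

On claude.ai itself, no config file is needed; the connector is added through the settings UI as described in step 2.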

The difference between an AI that starts from scratch and one that already has your full context is immediately obvious. For the full setup walkthrough, including configuration for Claude Desktop and Claude Code, see the integrations overview.
