How I Built an MCP Server from Scratch: A Step-by-Step Guide

Updated Mar 20, 2026 · 17 min read

I built an MCP server for Unmarkdown™ that lets Claude create, convert, and publish documents directly from a conversation. Here's exactly how I did it.

Not a toy example. Not a "hello world" demo. The actual production server that ships as an npm package, handles authentication, wraps a real API, and serves users on Claude Desktop, Claude Code, and claude.ai. I'll walk through every decision, show the real code patterns, and be honest about what was harder than expected.

If you want the conceptual overview of what MCP is, read that first. This post assumes you know the basics and want to build something real.

Why build an MCP server

MCP (Model Context Protocol) is Anthropic's open standard for connecting AI models to external tools and data. When you build an MCP server, you're giving Claude (or any MCP-compatible client) the ability to call functions in your system. Not through prompts or copy-paste. Through a structured protocol where Claude discovers your tools, understands their parameters, and calls them with validated arguments.
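
That discovery step is ordinary JSON-RPC under the hood. When a client connects, it sends a tools/list request and receives back something shaped roughly like this (an abridged sketch, not a verbatim wire capture):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "convert_markdown",
        "description": "Convert markdown to destination-specific formatted output.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "markdown": {
              "type": "string",
              "description": "Markdown content to convert"
            }
          },
          "required": ["markdown"]
        }
      }
    ]
  }
}
```

The SDK generates all of this for you from your tool registrations; you never hand-write these messages.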

For a product like Unmarkdown™, this means Claude can create a document, apply a template, convert it for Slack, and publish it with a shareable URL, all within a single conversation. The user never leaves the chat. The MCP server handles the plumbing.

The business case is straightforward: if your product has an API, an MCP server makes it AI-native. Users who work inside Claude can reach your product without switching context. That's distribution you can't buy.

MCP server project setup

Start with a TypeScript project. The MCP SDK is available in 10 languages, but TypeScript has the most mature SDK and the largest community. If you're wrapping a web API, it's the natural choice.

mkdir my-mcp-server
cd my-mcp-server
npm init -y

Install the core dependencies:

npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node

Two runtime dependencies. That's it. The MCP SDK (@modelcontextprotocol/sdk) handles all protocol negotiation, tool registration, and transport. Zod handles input validation, and the SDK uses it natively for defining tool parameter schemas.

For the Unmarkdown™ server, I also added tsx as a dev dependency for faster iteration during development. It lets you run TypeScript directly without a compile step:

npm install -D tsx

Here's the tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "declaration": true
  },
  "include": ["src/**/*"]
}

And the package.json fields that matter:

{
  "name": "@your-org/mcp-server",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "your-mcp-server": "./build/index.js"
  },
  "scripts": {
    "build": "tsc",
    "dev": "tsx --watch src/index.ts"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}

The bin field is important. It's what lets users run your server with npx. The type: "module" enables ES module syntax throughout the project.

Defining tools

Tools are the heart of an MCP server. Each tool is a function that Claude can discover, understand, and call. You define the name, a description, a parameter schema, and a handler function.

Here's how I structured the Unmarkdown™ server's tools. The key insight: organize your code into three files, not one. The entry point (index.ts) creates the server instance. A separate file (tools.ts) registers all tools. A third file (api-client.ts) handles HTTP communication with your backend. This separation matters when you have more than two or three tools.

The API client

Before defining tools, build a thin client that wraps your API. Every tool handler will use it:

// src/api-client.ts

export class ApiError extends Error {
  constructor(
    public status: number,
    public code: string,
    message: string
  ) {
    super(message);
    this.name = "ApiError";
  }
}

export class MyApiClient {
  private baseUrl: string;
  private apiKey: string;

  constructor(apiKey: string, baseUrl?: string) {
    this.apiKey = apiKey;
    this.baseUrl = (baseUrl ?? "https://api.yourservice.com")
      .replace(/\/+$/, "");
  }

  async request<T>(
    method: string,
    path: string,
    body?: Record<string, unknown>,
    query?: Record<string, string>
  ): Promise<T> {
    let url = `${this.baseUrl}${path}`;
    if (query) {
      url += `?${new URLSearchParams(query).toString()}`;
    }

    const headers: Record<string, string> = {
      Authorization: `Bearer ${this.apiKey}`,
      "User-Agent": "my-mcp-server/1.0",
    };

    if (body) {
      headers["Content-Type"] = "application/json";
    }

    const res = await fetch(url, {
      method,
      headers,
      body: body ? JSON.stringify(body) : undefined,
    });

    let data: any;
    try {
      data = await res.json();
    } catch {
      data = undefined; // non-JSON body (e.g. an HTML error page from a proxy)
    }

    if (!res.ok) {
      throw new ApiError(
        res.status,
        data?.error?.code ?? "unknown",
        data?.error?.message ?? `API returned ${res.status}`
      );
    }

    return data as T;
  }
}

This pattern, a typed error class plus a generic request method, gives you clean error handling across all your tools. The ApiError class carries the HTTP status and error code so your tool handlers can return structured error messages to Claude.

Tool definitions

Here's what real tool definitions look like. I'll walk through four tools from the Unmarkdown™ server, simplified but structurally identical to the production code.

convert_markdown: The most-used tool. Takes markdown and a destination, returns formatted output.

// src/tools.ts
import { z } from "zod";
import type { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { ApiError, type MyApiClient } from "./api-client.js";

function errorResult(err: unknown) {
  if (err instanceof ApiError) {
    return {
      content: [{
        type: "text" as const,
        text: `Error ${err.status} (${err.code}): ${err.message}`,
      }],
      isError: true,
    };
  }
  const message = err instanceof Error ? err.message : String(err);
  return {
    content: [{ type: "text" as const, text: `Error: ${message}` }],
    isError: true,
  };
}

export function registerTools(
  server: McpServer,
  client: MyApiClient
) {
  server.tool(
    "convert_markdown",
    "Convert markdown to destination-specific formatted output. " +
    "Returns JSON with 'html' and 'plain_text' fields.",
    {
      markdown: z.string().describe("Markdown content to convert"),
      destination: z
        .enum(["google-docs", "word", "slack", "email", "plain-text"])
        .optional()
        .describe('Target format (default: "generic")'),
      template_id: z
        .string()
        .optional()
        .describe('Visual template ID (default: "swiss")'),
    },
    async ({ markdown, destination, template_id }) => {
      try {
        const body: Record<string, unknown> = { markdown };
        if (destination) body.destination = destination;
        if (template_id) body.template_id = template_id;
        const result = await client.request("POST", "/v1/convert", body);
        return {
          content: [{
            type: "text" as const,
            text: JSON.stringify(result, null, 2),
          }],
        };
      } catch (err) {
        return errorResult(err);
      }
    }
  );
}

A few things to notice.

The description does real work. Claude reads tool descriptions to decide which tool to call and how. "Convert markdown" is too vague. "Convert markdown to destination-specific formatted output. Returns JSON with 'html' and 'plain_text' fields" tells Claude exactly what to expect. I spent more time writing tool descriptions than writing the handler logic. That's not an exaggeration. A bad description means Claude calls the wrong tool or passes wrong arguments, and no amount of handler code fixes that.

Zod .describe() on every parameter. Each parameter gets a human-readable description that Claude uses to understand what value to pass. The .optional() and .enum() constraints help too. Claude respects enums and won't invent values outside the allowed set.

Error handling returns, not throws. MCP tool handlers should catch errors and return them as isError: true results. If you throw an exception, the SDK catches it, but the error message is less clean. The errorResult helper centralizes this pattern.

Here's create_document, which demonstrates a write operation:

server.tool(
  "create_document",
  "Create a new markdown document in the system",
  {
    title: z.string().optional().describe("Document title"),
    content: z.string().optional()
      .describe("Markdown content (default: empty)"),
    folder: z.string().optional()
      .describe("Folder name or folder ID to place the document in"),
    template_id: z.string().optional()
      .describe('Visual template ID (default: "swiss")'),
  },
  async ({ title, content, folder, template_id }) => {
    try {
      const body: Record<string, unknown> = {};
      if (title) body.title = title;
      if (content) body.content = content;
      if (folder) body.folder = folder;
      if (template_id) body.template_id = template_id;
      const result = await client.request("POST", "/v1/documents", body);
      return {
        content: [{
          type: "text" as const,
          text: JSON.stringify(result, null, 2),
        }],
      };
    } catch (err) {
      return errorResult(err);
    }
  }
);

And publish_document, which takes a document ID and makes it publicly accessible:

server.tool(
  "publish_document",
  "Publish a document to a shareable web page. " +
  "Default visibility is 'link' (unlisted, anyone with the URL can view).",
  {
    id: z.string().describe("Document UUID"),
    slug: z.string().optional()
      .describe("Custom URL slug (auto-generated if omitted)"),
    visibility: z.enum(["public", "link"]).optional()
      .describe('"public" or "link" (default, unlisted)'),
  },
  async ({ id, slug, visibility }) => {
    try {
      const body: Record<string, unknown> = {};
      if (slug) body.slug = slug;
      if (visibility) body.visibility = visibility;
      const result = await client.request(
        "POST",
        `/v1/documents/${encodeURIComponent(id)}/publish`,
        body
      );
      return {
        content: [{
          type: "text" as const,
          text: JSON.stringify(result, null, 2),
        }],
      };
    } catch (err) {
      return errorResult(err);
    }
  }
);

And list_documents with pagination support:

server.tool(
  "list_documents",
  "List your saved documents with pagination. " +
  "Optionally filter by folder name or ID.",
  {
    folder: z.string().optional()
      .describe("Filter by folder name or folder ID"),
    limit: z.number().int().min(1).max(100).optional()
      .describe("Max results per page (default: 20, max: 100)"),
    cursor: z.string().optional()
      .describe("Pagination cursor from a previous response"),
  },
  async ({ folder, limit, cursor }) => {
    try {
      const query: Record<string, string> = {};
      if (folder) query.folder = folder;
      if (limit) query.limit = String(limit);
      if (cursor) query.cursor = cursor;
      const result = await client.request(
        "GET", "/v1/documents", undefined, query
      );
      return {
        content: [{
          type: "text" as const,
          text: JSON.stringify(result, null, 2),
        }],
      };
    } catch (err) {
      return errorResult(err);
    }
  }
);

The pattern is consistent across all tools: validate inputs with Zod, call the API client, return JSON results or structured errors. Once you have this pattern down, adding new tools is mechanical.
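
Since every handler follows the same try/call/serialize shape, you can factor the boilerplate into a small helper. Here's a sketch of the idea, written without SDK imports so it stands alone (in real code the result type would come from the SDK):

```typescript
// Shared result shape for tool handlers (mirrors what the SDK expects).
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Serialize a successful API response as pretty-printed JSON text.
function jsonResult(value: unknown): ToolResult {
  return {
    content: [{ type: "text", text: JSON.stringify(value, null, 2) }],
  };
}

// Wrap an async API call so the handler always returns a structured
// result instead of throwing across the protocol boundary.
async function runTool<T>(fn: () => Promise<T>): Promise<ToolResult> {
  try {
    return jsonResult(await fn());
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return {
      content: [{ type: "text", text: `Error: ${message}` }],
      isError: true,
    };
  }
}
```

With a helper like this, each handler body shrinks to a one-liner such as `runTool(() => client.request("POST", "/v1/convert", body))`.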

Tool annotations

The MCP SDK supports tool annotations that hint at a tool's behavior. These are optional but worth adding:

server.tool(
  "convert_markdown",
  "Convert markdown to formatted output",
  { /* zod schema */ },
  {
    title: "Convert Markdown",
    readOnlyHint: true,
    destructiveHint: false,
    idempotentHint: true,
    openWorldHint: true,
  },
  async (params) => { /* handler */ }
);

readOnlyHint tells the client this tool doesn't modify any state. destructiveHint flags tools that delete data. idempotentHint indicates the tool can be called multiple times with the same result. Clients can use these hints to decide whether to auto-approve tool calls or prompt the user for confirmation.

Server instructions

One feature I underutilized initially: server-level instructions. The McpServer constructor accepts an instructions string that the client presents to the AI model alongside the tool definitions. This is where you put workflow guidance that spans multiple tools.

const server = new McpServer(
  { name: "my-server", version: "1.0.0" },
  {
    instructions: `RECOMMENDED WORKFLOWS:
- "Create and share a document": create_document -> publish_document -> give user the URL
- "Convert for Slack": convert_markdown with destination "slack" -> show the plain_text field
- "List my documents": list_documents -> present as a clean list`
  }
);

For the Unmarkdown™ server, the instructions string is substantial. It explains which destinations need the browser copy button (Google Docs, Word, OneNote) versus which work directly from the API response (Slack, Plain Text). It documents Chart.js syntax. It describes folder semantics. Without these instructions, Claude would try to paste raw HTML into Google Docs and wonder why the formatting breaks.

Think of server instructions as a system prompt for your tools. The tool descriptions say what each tool does. The instructions say how to combine them.

Defining resources

Resources are read-only data that Claude can access without a tool call. They're less common than tools but useful for exposing reference data, configuration, or document content.

server.resource(
  "config",
  "server://config",
  async (uri) => ({
    contents: [{
      uri: uri.href,
      text: JSON.stringify({
        supportedDestinations: [
          "google-docs", "word", "slack",
          "onenote", "email", "plain-text"
        ],
        maxDocumentSize: "5MB",
        availableTemplates: 62,
      }),
      mimeType: "application/json",
    }],
  })
);

Resources are identified by URIs. The client can list available resources and read them by URI. For most API-wrapping MCP servers, tools are sufficient. Resources shine when you have static reference data that Claude should be able to consult without triggering an API call.

Server transport and entry point

The entry point wires everything together. Here's the pattern I settled on:

#!/usr/bin/env node
// src/index.ts

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport }
  from "@modelcontextprotocol/sdk/server/stdio.js";
import { MyApiClient } from "./api-client.js";
import { registerTools } from "./tools.js";

const apiKey = process.env.MY_API_KEY;
if (!apiKey) {
  process.stderr.write(
    "Error: MY_API_KEY environment variable is required.\n" +
    "Get your API key at https://yourservice.com/account/api\n"
  );
  process.exit(1);
}

const client = new MyApiClient(apiKey);
const server = new McpServer(
  { name: "my-server", version: "1.0.0" },
  { instructions: "..." }
);

registerTools(server, client);

const transport = new StdioServerTransport();
server.connect(transport).then(() => {
  process.stderr.write("MCP server running on stdio\n");
}).catch((err) => {
  process.stderr.write(`Fatal: ${err}\n`);
  process.exit(1);
});

Critical detail: all logging goes to stderr, never stdout. The stdio transport uses stdout for MCP protocol messages. If you write a console.log() in your handler, it corrupts the protocol stream and the connection dies silently. I lost two hours to this the first time. Use process.stderr.write() or console.error() for any logging.
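
One defensive trick, my own habit rather than anything the SDK requires: override console.log at startup so a forgotten debug line is rerouted to stderr instead of corrupting stdout.

```typescript
// Route any stray console.log output to stderr. On the stdio transport,
// stdout carries protocol messages, so this turns a silent connection
// killer into a harmless log line.
console.log = (...args: unknown[]): void => {
  process.stderr.write(args.map((a) => String(a)).join(" ") + "\n");
};
```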

The shebang line (#!/usr/bin/env node) at the top makes the compiled file executable when installed globally or via npx.

Authentication

For local MCP servers (stdio transport), the standard pattern is environment variables passed through the MCP client configuration:

{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["-y", "@your-org/mcp-server"],
      "env": {
        "MY_API_KEY": "your_api_key_here"
      }
    }
  }
}

The server reads process.env.MY_API_KEY at startup and passes it to the API client. Simple, secure (the key never appears in protocol messages), and compatible with every MCP client.

For the Unmarkdown™ server, I also built a remote HTTP endpoint with OAuth 2.1 authentication. This is substantially more complex. The remote transport uses Streamable HTTP instead of stdio, and the server needs to handle token exchange, session management, and multi-user isolation. If you're building your first MCP server, stick with stdio and API keys. Add the remote endpoint later when you need it.

One pattern I found valuable: validate the API key at startup, not on first use. If the key is missing or invalid, fail immediately with a clear error message. Users debugging MCP connection issues don't need the extra confusion of a server that starts successfully but fails on the first tool call.

const apiKey = process.env.MY_API_KEY;
if (!apiKey) {
  process.stderr.write(
    "Error: MY_API_KEY environment variable is required.\n"
  );
  process.exit(1);
}
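
That only catches a missing key. To catch an invalid one, make a single cheap authenticated request before connecting the transport. A hedged sketch: the probe endpoint (`/v1/me` in the comment) is hypothetical, so substitute whatever lightweight authenticated route your API provides.

```typescript
// Run a cheap authenticated probe at startup. Returns null on success,
// or an error message suitable for stderr on failure.
async function verifyApiKey(
  probe: () => Promise<unknown>
): Promise<string | null> {
  try {
    await probe();
    return null;
  } catch (err) {
    return err instanceof Error ? err.message : String(err);
  }
}

// Usage at startup (before server.connect):
//   const problem = await verifyApiKey(() => client.request("GET", "/v1/me"));
//   if (problem !== null) {
//     process.stderr.write(`Error: API key rejected: ${problem}\n`);
//     process.exit(1);
//   }
```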

Testing your MCP server

This was the hardest part of the whole process. Debugging MCP servers is genuinely harder than debugging regular APIs, because there's no browser DevTools, no curl equivalent, and no request/response log by default. The protocol runs over stdio, so you can't just hit an endpoint and see what comes back.

Three approaches that work:

1. MCP Inspector

The official debugging tool. It launches a browser-based UI where you can see your tools, fill in parameters, and call them interactively.

npx @modelcontextprotocol/inspector node build/index.js

This opens at http://127.0.0.1:6274. You'll see all your registered tools with their parameter schemas. Click any tool, fill in the fields, hit "Call Tool," and see the result. It's the closest thing to Postman for MCP servers.

The Inspector is essential for verifying that your Zod schemas produce the right JSON Schema, your descriptions render correctly, and your handlers return properly formatted results. I use it every time I add a new tool.

2. Claude Code directly

Once your server builds, point Claude Code at it by adding an .mcp.json file to your project root:

{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["./build/index.js"],
      "env": {
        "MY_API_KEY": "your_test_key"
      }
    }
  }
}

Then start a Claude Code session and ask it to use your tools. This is the real integration test. You'll quickly find out if your descriptions are clear enough for Claude to use the tools correctly, if your error messages are helpful, and if the tool output format makes sense in a conversation.

3. Stderr logging

When something goes wrong, add process.stderr.write() calls in your handlers. The Inspector and Claude Code both capture stderr output:

async ({ markdown, destination }) => {
  process.stderr.write(
    `convert_markdown called: destination=${destination}\n`
  );
  try {
    const result = await client.request("POST", "/v1/convert", {
      markdown,
      destination,
    });
    process.stderr.write(
      `convert_markdown success: ${JSON.stringify(result).length} bytes\n`
    );
    return { content: [{ type: "text" as const, text: JSON.stringify(result) }] };
  } catch (err) {
    process.stderr.write(`convert_markdown error: ${err}\n`);
    return errorResult(err);
  }
}

This is primitive, but it works. The MCP ecosystem doesn't have the equivalent of structured logging or request tracing yet. Stderr is what you've got. Use it liberally during development.
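
A small upgrade over raw writes that I found worthwhile: a one-line logger that prepends a timestamp and the tool name, which makes Inspector output much easier to scan. A trivial sketch:

```typescript
// Minimal stderr logger for development: timestamp plus tool name.
function log(tool: string, msg: string): void {
  process.stderr.write(`[${new Date().toISOString()}] ${tool}: ${msg}\n`);
}

// Example: log("convert_markdown", "called with destination=slack");
```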

Deployment and distribution

You have three options for getting your MCP server to users.

Publish to npm (for public distribution)

Users install and run with a single command:

{
  "mcpServers": {
    "your-server": {
      "command": "npx",
      "args": ["-y", "@your-org/mcp-server"],
      "env": {
        "MY_API_KEY": "user_key_here"
      }
    }
  }
}

This is how the Unmarkdown™ MCP server is distributed. npx -y @un-markdown/mcp-server installs and runs it in one step. The bin field in package.json makes this work automatically.

Before publishing, make sure your package.json has:

  • The bin field pointing to your compiled entry file
  • A files array that includes your build directory
  • The shebang (#!/usr/bin/env node) at the top of your entry file

Claude Desktop config (for local/private servers)

Point directly to the compiled JavaScript file:

{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/absolute/path/to/build/index.js"],
      "env": {
        "MY_API_KEY": "your_key"
      }
    }
  }
}

This works for personal tools or internal team servers that don't need public distribution.

Remote HTTP transport (for cloud deployment)

For servers that need to be accessible over the network (multi-user, no local install), the MCP spec defines Streamable HTTP transport. The Unmarkdown™ server runs a remote endpoint at https://unmarkdown.com/api/mcp that claude.ai users can connect to directly without installing anything.

Remote transport is significantly more complex: you need HTTP session handling, authentication (OAuth 2.1 or API keys via headers), and proper CORS configuration. I'd recommend getting your server working locally with stdio first, then adding remote transport as a second phase.

Lessons from production

After running the Unmarkdown™ MCP server in production for several months, here's what I learned that the documentation doesn't cover.

Tool descriptions are your most important code. I've rewritten tool descriptions five or six times each. Every time Claude misuses a tool, the fix is almost always a better description, not a code change. Be specific about what the tool returns, what side effects it has, and when NOT to use it.

Server instructions prevent multi-tool confusion. When you have 7 tools, Claude sometimes picks the wrong one or calls them in the wrong order. The server-level instructions field is where you define workflows: "To share a document, first call create_document, then call publish_document." Without these, Claude will try to publish a document that doesn't exist yet.

Error messages should tell Claude what to do next. "Error 404" is useless. "Error 404: Document not found. The document may have been deleted. Call list_documents to see available documents." gives Claude a recovery path. I added action suggestions to every error message, and the user experience improved noticeably.
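
Concretely, I keep a small table mapping common statuses to a suggested next step and append it to every error message. A sketch of the pattern (the hint wording here is illustrative, not the exact production copy):

```typescript
// Suggested recovery actions for common HTTP statuses.
const recoveryHints: Record<number, string> = {
  401: "The API key may be invalid. Check the MY_API_KEY environment variable.",
  404: "The resource may have been deleted. Call list_documents to see what exists.",
  429: "Rate limited. Wait a moment before retrying.",
};

// Build an error string that tells Claude what happened AND what to try next.
function errorText(status: number, code: string, message: string): string {
  const hint = recoveryHints[status];
  return `Error ${status} (${code}): ${message}${hint ? ` ${hint}` : ""}`;
}
```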

Validate at startup, not at call time. If your API key is missing or your base URL is wrong, fail when the server starts. Don't wait until the user asks Claude to do something and then return a cryptic authentication error three tool calls deep.

Keep your tool count manageable. The Unmarkdown™ server has 7 tools. That feels like the right ceiling for a single-domain server. Each additional tool increases the chance that Claude picks the wrong one. If you're wrapping an API with 30 endpoints, consider which 5 to 10 are most useful in a conversation and expose only those.

The full picture

Here's what the directory structure looks like for a production MCP server:

my-mcp-server/
  src/
    index.ts          # Entry point, server creation, transport
    tools.ts          # Tool definitions and handlers
    api-client.ts     # API client with typed errors
  build/              # Compiled output
  package.json
  tsconfig.json

Three source files. Two runtime dependencies (@modelcontextprotocol/sdk and zod). One build step (tsc). The MCP SDK handles all the protocol complexity. Your job is defining tools with clear descriptions, writing handlers that call your API, and returning structured results.

The Unmarkdown™ MCP server is open source on GitHub if you want a production reference. It demonstrates all the patterns in this post: API client abstraction, structured error handling, tool annotations, server instructions, and stdio transport.

Building your first MCP server takes an afternoon. Getting the tool descriptions right takes a week of iteration. That's where the real work is: not in the code, but in the language you use to tell Claude what your tools do. The SDK makes the protocol invisible. Your descriptions make the tools usable.
