Model Context Protocol (MCP) is the USB-C of AI tooling. One standard that lets Claude, Cursor, Windsurf, and a growing list of clients connect to your custom tools, APIs, and data sources. Build an MCP server once — wire it to every AI tool in your stack.
This is the hands-on build. We’ll go from zero to a working MCP server in Node.js that you can connect to Claude Desktop or Cursor in under an hour.
What MCP Actually Is
Before the build: a one-paragraph mental model.
An MCP server is a lightweight process that runs alongside your AI client. It exposes tools (functions the LLM can call), resources (file-like data it can read), and prompts (reusable templates). The LLM client discovers these capabilities, and when a user’s request requires one, the client calls your server — transparently, mid-conversation.
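Under the hood this is JSON-RPC 2.0 over a transport (stdio for local servers). A sketch of the discovery step — the client asks for the tool catalog:

```json
{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}
```

and the server answers with its tools, each described by a name, a description, and a JSON Schema for its inputs (trimmed here to the essentials; the exact payload varies by server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get current weather conditions for a city",
        "inputSchema": { "type": "object", "properties": { "city": { "type": "string" } } }
      }
    ]
  }
}
```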
The official MCP spec was published by Anthropic in late 2024 and has since been adopted by Cursor, Windsurf, VS Code Copilot, and others. The 2026 roadmap focuses on enterprise readiness, transport scalability, and agent-to-agent communication — it’s not going away.
What We’re Building
A Node.js MCP server that exposes two tools:
- get_weather — fetches current weather for a city (calls a real API)
- run_shell_command — executes a shell command and returns stdout (scoped to safe commands)
By the end you’ll have this connected to Claude Desktop. The pattern generalizes to any tool you want to expose.
Setup
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx
Initialize TypeScript:
npx tsc --init
Update tsconfig.json:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "strict": true
  }
}
The official TypeScript SDK (v1.x — v2 is pre-alpha as of March 2026) handles all the MCP protocol communication. You write tool definitions and handlers; the SDK manages the JSON-RPC transport.
The Server
Create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { exec } from "child_process";
import { promisify } from "util";
const execAsync = promisify(exec);
// Create the server
const server = new McpServer({
  name: "my-mcp-server",
  version: "1.0.0"
});
// ── Tool 1: Get Weather ──────────────────────────────────────────────────────
server.tool(
  "get_weather",
  "Get current weather conditions for a city",
  {
    city: z.string().describe("The city name, e.g. 'New York' or 'Tokyo'"),
    units: z.enum(["celsius", "fahrenheit"]).default("celsius")
  },
  async ({ city, units }) => {
    // Using open-meteo — no API key required
    // https://open-meteo.com/
    const geocodeRes = await fetch(
      `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`
    );
    const geocode = await geocodeRes.json() as any;
    if (!geocode.results?.length) {
      return { content: [{ type: "text", text: `Could not find location: ${city}` }] };
    }
    const { latitude, longitude, name, country } = geocode.results[0];
    // `units` is already constrained to "celsius" | "fahrenheit" by the schema
    const weatherRes = await fetch(
      `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true&temperature_unit=${units}`
    );
    const weather = await weatherRes.json() as any;
    const current = weather.current_weather;
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          location: `${name}, ${country}`,
          temperature: `${current.temperature}°${units === "fahrenheit" ? "F" : "C"}`,
          windspeed: `${current.windspeed} km/h`,
          weathercode: current.weathercode
        }, null, 2)
      }]
    };
  }
);
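The raw weathercode in that response is a WMO interpretation code. If you want human-readable conditions, a small lookup works — here is a hypothetical helper (not part of the SDK), with values taken from open-meteo's documented WMO code table, abbreviated:

```typescript
// Partial WMO weather interpretation code table, per open-meteo's docs.
// Codes not listed fall through to an "unknown" description.
const WMO_DESCRIPTIONS: Record<number, string> = {
  0: "clear sky",
  1: "mainly clear",
  2: "partly cloudy",
  3: "overcast",
  45: "fog",
  61: "light rain",
  63: "moderate rain",
  65: "heavy rain",
  71: "light snow",
  95: "thunderstorm",
};

function describeWeatherCode(code: number): string {
  return WMO_DESCRIPTIONS[code] ?? `unknown (code ${code})`;
}
```

Swap the `weathercode` field in the tool's JSON output for `describeWeatherCode(current.weathercode)` and the model gets prose instead of a magic number.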
// ── Tool 2: Run Shell Command ────────────────────────────────────────────────
const ALLOWED_COMMANDS = ["ls", "pwd", "echo", "cat", "date", "whoami", "uname"];
server.tool(
  "run_shell_command",
  "Execute a safe shell command and return the output. Only whitelisted commands are allowed.",
  {
    command: z.string().describe("The shell command to run"),
  },
  async ({ command }) => {
    // exec() runs the string through /bin/sh, so "echo hi && rm -rf /" would
    // slip past a first-token check. Reject shell metacharacters up front.
    if (/[;&|`$<>(){}\\]/.test(command)) {
      return {
        content: [{ type: "text", text: "Shell metacharacters are not allowed." }],
        isError: true
      };
    }
    const baseCommand = command.trim().split(/\s+/)[0];
    if (!ALLOWED_COMMANDS.includes(baseCommand)) {
      return {
        content: [{
          type: "text",
          text: `Command not allowed: ${baseCommand}. Allowed commands: ${ALLOWED_COMMANDS.join(", ")}`
        }],
        isError: true
      };
    }
    try {
      const { stdout, stderr } = await execAsync(command, { timeout: 5000 });
      return {
        content: [{
          type: "text",
          text: stdout || stderr || "(no output)"
        }]
      };
    } catch (err: any) {
      return {
        content: [{ type: "text", text: `Error: ${err.message}` }],
        isError: true
      };
    }
  }
);
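If your tools need to pass user-supplied arguments, consider skipping the shell entirely. Node's built-in `execFile` invokes the binary directly with an argument array, so metacharacters in arguments are inert. A sketch (`runSafe` is a hypothetical helper name, not an SDK API):

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Run a binary with explicit arguments and no shell in between:
// an argument like "hi; rm -rf /" is just a literal string here.
async function runSafe(binary: string, args: string[]): Promise<string> {
  const { stdout } = await execFileAsync(binary, args, { timeout: 5000 });
  return stdout;
}
```

You would still check `binary` against `ALLOWED_COMMANDS` before calling it; `execFile` removes the injection surface, not the need for a whitelist.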
// ── Start the server ─────────────────────────────────────────────────────────
const transport = new StdioServerTransport();
await server.connect(transport);
// MCP servers communicate over stdio — never log to stdout
// Use stderr for debugging:
console.error("MCP server running");
Critical note on logging: MCP servers using stdio transport communicate via JSON-RPC over stdout. Any console.log() that writes to stdout will corrupt the protocol stream. Use console.error() for all debug output — it goes to stderr and doesn’t interfere.
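A tiny helper keeps debug output structured and off stdout. `formatDebugLine` and `debug` are hypothetical names, not SDK APIs — just one way to apply the rule above:

```typescript
// Build one JSON object per event so logs stay greppable.
function formatDebugLine(message: string, data?: unknown): string {
  return JSON.stringify({ ts: new Date().toISOString(), message, data });
}

// Write to stderr only — stdout belongs to the JSON-RPC stream.
function debug(message: string, data?: unknown): void {
  process.stderr.write(formatDebugLine(message, data) + "\n");
}
```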
Add a Build Script
Update package.json:
{
  "scripts": {
    "build": "tsc",
    "dev": "tsx src/index.ts",
    "start": "node dist/index.js"
  },
  "type": "module"
}
Build it:
npm run build
Test it manually. A stdio MCP server expects the initialize handshake before it will answer tools/list, so pipe all three messages in order:
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}' \
  | node dist/index.js
The final response lists get_weather and run_shell_command. For interactive debugging, the official MCP Inspector (npx @modelcontextprotocol/inspector node dist/index.js) gives you a UI over the same protocol.
Connect to Claude Desktop
Find or create the Claude Desktop config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add your server:
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "node",
      "args": ["/absolute/path/to/my-mcp-server/dist/index.js"]
    }
  }
}
Restart Claude Desktop. You’ll see a hammer icon in the chat input — that’s your tools. Ask: “What’s the weather in Tokyo right now?” and watch Claude call your server.
Connect to Cursor
In Cursor settings, navigate to Features → MCP Servers and add:
{
  "my-mcp-server": {
    "command": "node",
    "args": ["/absolute/path/to/dist/index.js"]
  }
}
MCP tools in Cursor show up in the Composer context. You can ask Cursor to check the weather, run commands, or call any custom tool you’ve defined — all within your coding session.
For a complete setup walkthrough, the DEV Community guide on MCP in Cursor covers edge cases around remote servers and auth.
Adding Resources
Tools are for actions. Resources are for data your LLM can read. Here’s a resource that exposes a config file:
import { readFileSync } from "fs";

server.resource(
  "project-config",
  "file:///project/config.json",
  async (uri) => {
    try {
      const content = readFileSync("./config.json", "utf-8");
      return {
        contents: [{
          uri: uri.href,
          mimeType: "application/json",
          text: content
        }]
      };
    } catch {
      throw new Error("Config file not found");
    }
  }
);
Resources let the LLM read files, database records, API responses — anything it needs as context without you having to paste it manually.
Error Handling and Production Patterns
Three things that matter when your server is running in a real workflow:
Always validate inputs with Zod. The SDK uses your schema for validation automatically — malformed inputs get rejected before reaching your handler, so you never need to re-check types or shapes. But Zod only enforces structure: semantic checks, like the command whitelist above, still belong in the handler.
Return isError: true for recoverable errors. This tells the LLM the tool call failed so it can try a different approach or explain the problem. An uncaught exception becomes a generic error response instead, which gives the model far less to work with than a deliberate, descriptive error message.
Keep tool descriptions precise. The LLM reads your description field to decide when to call the tool. Vague descriptions lead to wrong or unnecessary calls. “Get weather for a city” is better than “weather tool.”
What to Build Next
The weather/shell example is a scaffold. Real MCP servers that developers are running today:
- Database MCP server — expose Postgres/SQLite query tools, let Claude write and run queries against your schema
- GitHub MCP server — Anthropic ships an official one — PRs, issues, code search, all accessible from Claude
- Internal docs server — index your team’s Notion/Confluence into a resource, let your AI tools query it without copy-pasting
- Stripe/billing server — expose customer lookup, subscription status, usage metrics to your agentic workflows
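For the database server in particular, gate what SQL the model is allowed to execute before it ever reaches the driver. A minimal sketch — `isReadOnlyQuery` is a hypothetical guard, and a production server should also enforce this at the connection level (e.g. a read-only database user):

```typescript
// Accept only a single read-only statement: SELECT, or a WITH ... SELECT CTE.
// Rejects statement chaining like "SELECT 1; DROP TABLE users".
// Note: some dialects allow WITH to prefix writes, so treat this as a first
// filter, not the real enforcement — that belongs to a read-only connection.
function isReadOnlyQuery(sql: string): boolean {
  const stmt = sql.trim().replace(/;\s*$/, "");
  if (stmt.includes(";")) return false;
  return /^(select|with)\b/i.test(stmt);
}
```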
MCP is also increasingly relevant to the agentic coding workflow — tools like Cursor and Windsurf use MCP to give the AI agent access to your actual codebase, running tests, and deployment systems.
For the revenue angle: MCP servers packaged as products are starting to appear on Clawhub and similar marketplaces. A well-built MCP server for a popular SaaS API (Salesforce, HubSpot, Shopify) is a legitimately sellable product to development teams that want the capability without building it themselves. That market is early.
The protocol is simple enough to build in an afternoon. The value is in which tools you expose.