Model Context Protocol
Learn what Model Context Protocol (MCP) means in AI and machine learning, with examples and related concepts.
Definition
Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI models connect to external tools, data sources, and services. Think of it as a universal adapter — instead of every AI app building custom integrations for every service, MCP provides one standard protocol that works everywhere.
Before MCP, connecting an AI model to (say) a Slack workspace required custom code for each AI platform. With MCP, a Slack integration built once works with Claude, Claude Code, and any other MCP-compatible client. It’s the same idea as USB — one standard plug instead of proprietary cables for every device.
MCP is already supported by Claude Code, Cursor, Windsurf, and other AI coding tools. It defines three types of capabilities: tools (actions the model can take), resources (data the model can read), and prompts (templates the model can use).
How It Works
MCP Architecture:

```
┌─────────────────┐                        ┌──────────────────┐
│   MCP Client    │      MCP Protocol      │    MCP Server    │
│    (AI app)     │ ←────────────────────→ │  (integration)   │
│                 │    JSON-RPC over       │                  │
│  Claude Code    │    stdio or HTTP       │  Slack server    │
│  Cursor         │                        │  GitHub server   │
│ Your custom app │                        │ Database server  │
└─────────────────┘                        └──────────────────┘
```

The client (the AI app) discovers what tools are available, then the model decides when to call them.
The Three Primitives

1. **Tools** — actions the model can execute (e.g. `send_slack_message`, `create_github_issue`). The model calls these to do things.
2. **Resources** — data the model can read (e.g. `file://project/README.md`, `db://users/table`). The model reads these for context.
3. **Prompts** — templates for common interactions (e.g. `code-review`, `summarize-thread`). Pre-built prompt templates that accept parameters.
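To make the primitives concrete, here is a sketch of the kind of metadata a server advertises for each one, written as plain Python dicts modeled on the MCP wire format. The names, schema, and URI are invented for this example:

```python
# Illustrative payloads for the three MCP primitives.
# All names and schemas here are made up for this sketch.

# A tool: an action the model can invoke, described by a JSON Schema.
tool = {
    "name": "send_slack_message",
    "description": "Post a message to a Slack channel",
    "inputSchema": {
        "type": "object",
        "properties": {
            "channel": {"type": "string"},
            "text": {"type": "string"},
        },
        "required": ["channel", "text"],
    },
}

# A resource: a piece of data the model can read, addressed by URI.
resource = {
    "uri": "file://project/README.md",
    "name": "Project README",
    "mimeType": "text/markdown",
}

# A prompt: a reusable template with declared arguments.
prompt = {
    "name": "code-review",
    "description": "Review a diff for bugs and style issues",
    "arguments": [{"name": "diff", "required": True}],
}

print(tool["name"], resource["uri"], prompt["name"])
```

The common thread: each primitive is self-describing, so a client can list what a server offers before the model ever uses it.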
Communication
MCP uses JSON-RPC 2.0 over two transports:
- stdio — for local servers, which the client runs as a child process
- HTTP — for remote, hosted servers; recent revisions of the spec use Streamable HTTP, which superseded the original HTTP + SSE transport
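The framing itself is plain JSON-RPC 2.0. Below is a hedged sketch of the request and response shapes involved in calling a tool; the `id` values and the tool name are illustrative, and the full handshake (initialization, capability negotiation) is omitted:

```python
import json

# A client asks what tools exist, then calls one.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "word_count", "arguments": {"text": "hello world"}},
}

# Frames are serialized as JSON; a response echoes the request's id.
wire = json.dumps(call_request)
response = {
    "jsonrpc": "2.0",
    "id": json.loads(wire)["id"],  # matches the request it answers
    "result": {"content": [{"type": "text", "text": "Word count: 2"}]},
}
print(response["id"])
```

Because the envelope is standard JSON-RPC, any language with a JSON library can implement a client or server.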
Why It Matters
- Universal compatibility — Build a tool integration once, use it with any MCP-compatible AI app
- Growing ecosystem — Hundreds of MCP servers already exist for popular services (Slack, GitHub, databases, etc.)
- Developer experience — tools like Claude Code use MCP to plug external integrations in alongside their built-in capabilities
- Security — MCP defines capability negotiation and permission models, so tools declare what they can do
- Open standard — Anyone can build MCP servers or clients. Not locked to one AI provider.
Example
`claude_desktop_config.json` — configure MCP servers for Claude:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```
```python
# Build a simple MCP server in Python
import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

server = Server("my-tools")

@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="word_count",
            description="Count words in a text",
            inputSchema={
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "Text to count words in"}
                },
                "required": ["text"],
            },
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "word_count":
        count = len(arguments["text"].split())
        return [TextContent(type="text", text=f"Word count: {count}")]
    raise ValueError(f"Unknown tool: {name}")

# Run the server over stdio
async def main():
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```
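To try the server above with an MCP client, register it in the client's config so the client launches it as a child process over stdio. The file path below is a placeholder for wherever you save the script:

```json
{
  "mcpServers": {
    "my-tools": {
      "command": "python",
      "args": ["/path/to/my_tools_server.py"]
    }
  }
}
```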
```typescript
// Build an MCP server in TypeScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-tools", version: "1.0.0" });

// Parameters are declared as a Zod shape; the SDK derives the JSON Schema.
server.tool(
  "word_count",
  "Count words in a text",
  { text: z.string().describe("Text to count words in") },
  async ({ text }) => ({
    content: [{ type: "text", text: `Word count: ${text.trim().split(/\s+/).length}` }]
  })
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
MCP vs Other Approaches
| Approach | Scope | Standard? | Example |
|---|---|---|---|
| MCP | Universal tool protocol | Open standard | Any tool, any AI client |
| OpenAI Functions | OpenAI tool calling | Proprietary | GPT function calling |
| LangChain Tools | Python framework tools | Framework-specific | LangChain agents |
| Custom REST APIs | One-off integrations | Ad hoc | Each app builds its own |
Key Takeaways
- MCP is an open standard for connecting AI models to external tools and data — built by Anthropic
- It defines three primitives: tools (actions), resources (data), and prompts (templates)
- Build an MCP server once, and it works with Claude Code, Cursor, and any MCP-compatible client
- The ecosystem already includes servers for GitHub, Slack, databases, file systems, and hundreds more
- MCP is to AI tools what USB is to hardware — a universal standard replacing proprietary integrations
Part of the DeepRaft Glossary — AI and ML terms explained for developers.