The Model Context Protocol (MCP) is an open specification published by Anthropic on 25 November 2024. It defines how a host application embedding an LLM communicates, through one client connection per server, with external services (the servers) that expose tools, resources, and prompts. MCP plays the role for AI agents that the Language Server Protocol plays for code editors: a single standard so that any client can talk to any server.
Primitives. A server may expose three kinds of capability.
- Tools are model-invokable functions, described by JSON schemas. The client surfaces them to the model, which decides when to call them.
- Resources are read-only data items (files, database rows, API responses) addressable by URI and presented as context.
- Prompts are user-invokable templates, typically surfaced as slash commands.
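As an illustrative sketch of the first primitive: a tool is advertised to the client as a name, a description, and a JSON Schema for its arguments, and the model invokes it with a `tools/call` request. The field names below follow the MCP schema; the weather tool itself is hypothetical.

```python
import json

# How a server might advertise a (hypothetical) weather tool in its
# response to a tools/list request. inputSchema is plain JSON Schema.
tool_descriptor = {
    "name": "get_forecast",  # hypothetical tool name
    "description": "Return a short weather forecast for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# The JSON-RPC request the client sends when the model decides to call it.
call_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "Berlin"}},
}

print(json.dumps(call_request, indent=2))
```

Because the argument schema is ordinary JSON Schema, the host can validate the model's proposed arguments before anything reaches the server.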
The transport is JSON-RPC 2.0 over stdio or over HTTP with Server-Sent Events (the March 2025 revision of the spec replaced the HTTP+SSE pairing with a single streamable HTTP transport). Authentication is delegated to the transport layer (OAuth 2.1 in HTTP mode, environment variables in stdio mode).
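In stdio mode the framing is deliberately simple: each JSON-RPC message is one line of JSON. A minimal sketch of writing and reading a message over an in-memory stream, assuming that newline-delimited framing:

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    """Serialize one JSON-RPC message as a single line (stdio framing)."""
    stream.write(json.dumps(msg) + "\n")

def read_message(stream) -> dict:
    """Read one newline-delimited JSON-RPC message."""
    return json.loads(stream.readline())

# Round-trip a ping request through an in-memory stand-in for stdio.
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "method": "ping", "id": 1})
buf.seek(0)
assert read_message(buf) == {"jsonrpc": "2.0", "method": "ping", "id": 1}
```

In a real deployment the host launches the server as a subprocess and attaches these reads and writes to its stdin and stdout.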
Discovery and lifecycle. On connection, the client and server negotiate a protocol version and exchange capability lists. The client can subscribe to resource changes, request prompts, and call tools, all with structured progress and error reporting. Servers can also request completions from the client's model through the sampling API, which lets a server delegate sub-LLM calls to the client's chosen model without holding model credentials of its own.
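The negotiation above can be sketched as three JSON-RPC messages. The field names follow the 2024-11-05 revision of the schema; the client and server names are placeholders.

```python
# Client -> server: propose a protocol version and declare capabilities.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"roots": {"listChanged": True}, "sampling": {}},
        "clientInfo": {"name": "example-host", "version": "0.1"},  # placeholder
    },
}

# Server -> client: agree on a version and advertise its own capabilities.
initialize_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}, "resources": {"subscribe": True}},
        "serverInfo": {"name": "example-server", "version": "0.1"},  # placeholder
    },
}

# Client -> server: handshake complete, normal traffic may begin.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```

Only after the `initialized` notification does the client start listing tools, reading resources, and subscribing to changes.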
Why it matters. Before MCP, each LLM application implemented its own ad-hoc integration layer for files, GitHub, Slack, databases, browsers and so on. MCP lets a single server be reused across Claude Desktop, Cursor, Zed, VS Code, ChatGPT, Gemini and any other compliant host. Conversely a host gets immediate access to a growing ecosystem of community servers without writing per-tool code.
Adoption. Anthropic launched MCP with reference servers for filesystem, Git, GitHub, Slack, Postgres, Puppeteer and Brave Search. Through 2025 adoption broadened sharply: OpenAI added MCP support to ChatGPT and the Agents SDK in early 2025, Google added MCP support to Gemini and Vertex AI mid-year, and Microsoft integrated it into VS Code and Copilot Studio. By early 2026 MCP is the de facto interoperability layer for agent tooling, with thousands of public servers covering productivity tools, scientific software, browsers, databases and home automation.
Security. MCP servers run with the privileges of the user who launched them, so they expand the agent's blast radius. Best practice is to run servers in sandboxes, scope credentials narrowly, log every tool invocation, and use the protocol's roots mechanism to constrain filesystem access. Hosts increasingly add per-server consent prompts and policy controls.
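The roots mechanism works by the client answering the server's `roots/list` request with a set of `file://` URIs; a well-behaved server then refuses paths outside them. A minimal sketch of that check, with made-up root paths:

```python
from pathlib import Path

# What a client might return for a roots/list request (paths are made up).
roots_result = {
    "roots": [
        {"uri": "file:///home/user/project", "name": "project"},
    ]
}

def path_allowed(path: str, roots: dict) -> bool:
    """True if `path` falls under one of the client's declared roots."""
    p = Path(path).resolve()
    for root in roots["roots"]:
        root_path = Path(root["uri"].removeprefix("file://"))
        if p == root_path or root_path in p.parents:
            return True
    return False

assert path_allowed("/home/user/project/src/main.py", roots_result)
assert not path_allowed("/etc/passwd", roots_result)
```

Note that roots are advisory, not an enforcement boundary: real containment still comes from the sandbox and credential scoping described above.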
MCP is the connective tissue of the agentic-AI ecosystem: a small, deliberately boring protocol that lets the rest of the stack compose.
Related terms: Claude 3.5 Sonnet Computer Use, Claude 4 Family, Devin / AI Software Engineer
Discussed in:
- Chapter 16: Ethics & Safety, Agents