MCP: The Protocol That Unified AI Tooling

Fourteen months after launch, the Model Context Protocol has 97 million monthly SDK downloads and backing from OpenAI, Google, Microsoft, and Anthropic. Here's how it works.

AI Infrastructure · Developer Tools · Open Source · Protocols

Every AI company was building the same thing. Claude needed GitHub integration. ChatGPT needed Slack integration. Cursor needed database integration. Multiply every AI client by every tool and data source, and you get what developers call the "n×m problem": an explosion of one-off integrations that nobody wanted to maintain.

The Model Context Protocol fixed this. MCP gives AI agents a universal language for accessing tools, reading data, and executing functions. Write one MCP server for your database; every compliant AI client can use it. The math flips from n×m to n+m.

Fourteen months after its November 2024 launch, MCP has 97 million monthly SDK downloads, over 10,000 active public servers, and adoption by Claude, ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code.

GitHub reports that 1.13 million repositories now import LLM SDKs, with 178% year-over-year growth in that category. That's not adoption. That's infrastructure.

Three Layers, One Protocol

The architecture breaks into hosts, clients, and servers.

Hosts are the LLM applications users interact with directly: Claude Desktop, ChatGPT, VS Code with Copilot. Clients live inside hosts, managing connections to MCP servers and handling protocol negotiation. Servers expose the actual capabilities. A GitHub server might offer tools for creating issues and reading files. A database server might expose query functions. A CRM server might provide customer context.

All of this runs on JSON-RPC 2.0, the same messaging protocol that powers the Language Server Protocol for IDE integrations. The spec explicitly draws that inspiration: just as LSP standardized how editors talk to language analyzers, MCP standardizes how AI agents talk to external services.
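The JSON-RPC 2.0 envelope is easy to see in miniature. Below is a sketch in Python of a request/response pair; the `tools/list` method name comes from the MCP spec, while the payload contents are illustrative:

```python
import json

# JSON-RPC 2.0 request: every MCP message carries the "jsonrpc": "2.0"
# envelope; requests add an id (for pairing with the response), a method,
# and optional params.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # MCP method for discovering a server's tools
    "params": {},
}

# The matching response echoes the id and carries either "result" or "error".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "create_issue", "description": "Create a GitHub issue"}
        ]
    },
}

wire = json.dumps(request)  # what actually travels over the transport
print(json.loads(wire)["method"])  # → tools/list
```

The `id` field is what lets a single connection multiplex many in-flight requests: responses can arrive out of order and still be matched to their callers.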

MCP servers can expose three types of capabilities. Resources are data the AI can read but not modify: files, database records, API responses. The model consumes these for context. Tools are functions the AI can execute: creating a calendar event, sending a Slack message, running a database query. Tools require explicit user consent before execution, and this is non-negotiable in the spec. Prompts are templated workflows that servers can suggest; a code review server might expose a "review this PR" prompt that structures how the model should approach the task.

There's also a reverse capability worth noting. Sampling lets servers request completions from the host's LLM, enabling agentic patterns where a server can ask the model to reason about something before returning a result. Like tools, sampling requires explicit user consent.
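A host-side consent gate for tool calls and sampling requests might look like the sketch below. Everything here is illustrative: `consent_gate` and its signature are hypothetical, not from any MCP SDK. The point it demonstrates is architectural: the host, not the server, holds the veto.

```python
from typing import Callable

def consent_gate(ask_user: Callable[[str], bool]):
    """Wrap server-initiated actions (tool calls, sampling) behind a user prompt.

    Hypothetical helper, not part of any MCP SDK: the host supplies ask_user,
    which presents the action to the human and returns their decision.
    """
    def guard(action: str, run: Callable[[], dict]) -> dict:
        if not ask_user(f"Allow the server to {action}?"):
            # Denied actions never execute; the server gets a JSON-RPC-style error.
            return {"error": {"code": -32000, "message": "User denied consent"}}
        return {"result": run()}
    return guard

# Usage: a policy that auto-approves read-style actions and denies the rest.
guard = consent_gate(lambda prompt: "read" in prompt)
print(guard("read file README.md", lambda: {"contents": "# Hello"}))
print(guard("send a Slack message", lambda: {"ok": True}))
```

In a real host the `ask_user` callback would be a UI dialog; the design keeps the policy decision out of the server's hands entirely.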

Local and Remote

MCP defines two transport mechanisms. stdio handles local processes: your MCP server runs as a subprocess, communicating through standard input and output. Fast, simple, no network overhead. Streamable HTTP handles remote servers. Earlier versions used Server-Sent Events, but the November 2025 spec update deprecated SSE in favor of a cleaner streaming approach. Connections are stateful, with capability negotiation on initialization.
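The stdio transport's framing is simple enough to sketch in a few lines: one JSON-RPC message per line, with no embedded newlines inside a message (per the spec's stdio transport rules). A minimal Python sketch, using an in-memory stream in place of a real subprocess pipe:

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    # One JSON-RPC message per line; json.dumps never emits raw newlines.
    stream.write(json.dumps(msg) + "\n")

def read_messages(stream):
    # The reader just splits on newlines and parses each line as JSON.
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Simulate the pipe between host and server with an in-memory buffer.
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "result": {}})

pipe.seek(0)
for msg in read_messages(pipe):
    print(msg.get("method") or "response")
```

With a real server, `pipe` would be the subprocess's stdin/stdout; nothing else about the framing changes, which is why stdio servers are so easy to write and test.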

Pento's production retrospective notes that the November 2025 spec also added async operations, statelessness options, server identity verification, and an official discovery registry. These are the kinds of enterprise-grade additions you see when a protocol matures from experiment to infrastructure.

Why Not Just APIs?

The obvious comparison is to APIs. We already have REST, GraphQL, and countless tool-calling conventions. So why another protocol?

The answer comes down to context negotiation. Traditional APIs assume the client knows exactly what it wants. MCP assumes the opposite: an AI agent exploring a capability surface, discovering what's available, and deciding at runtime what to use. This is why MCP connections start with capability negotiation. The server announces what it can do. The client announces what it understands. They agree on a common feature set. This handshake enables forward compatibility; new capabilities can be added without breaking older clients.
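The handshake can be sketched with the spec's `initialize` exchange. Field names below follow the MCP initialize request and result; the version string and capability contents are illustrative:

```python
# Client opens the connection by declaring its protocol version and what it
# supports (e.g. sampling). Shapes follow the MCP initialize exchange;
# specific values are illustrative.
client_init = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",  # illustrative date-based version
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-host", "version": "1.0"},
    },
}

# Server answers with what it offers. Capabilities are declared per side:
# servers announce tools/resources/prompts, clients announce things like
# sampling support.
server_init_result = {
    "protocolVersion": "2025-11-25",
    "capabilities": {"tools": {}, "resources": {}},
    "serverInfo": {"name": "example-server", "version": "1.0"},
}

# After the handshake, each side only uses what the other announced.
# Unknown capability keys are ignored — that's the forward compatibility.
print("client may call:", sorted(server_init_result["capabilities"]))
print("server may request:", sorted(client_init["params"]["capabilities"]))
```

Note that the two capability sets are different axes, not an intersection: the server's list governs what the client may call, and the client's list governs what the server may request back.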

The second difference is safety constraints. MCP bakes user consent into the protocol itself. Servers cannot execute tools or sample from the LLM without the host obtaining permission.

Not optional. The spec treats security as a first-class architectural concern.

Lessons from Pento's Year in Production

Pento deployed MCP throughout 2025 and documented some practical lessons worth knowing.

First: expose fewer, more specialized tools. Agents perform better when tools do one thing well rather than offering broad, ambiguous capabilities. Tool descriptions directly impact model performance; vague descriptions produce vague usage. Second: default to read-only. Give agents the minimum permissions they need. Write access should be explicit, not assumed.

Third, and this one matters: security gaps remain. Pento documents authentication vulnerabilities, prompt injection risks, OAuth token storage hazards, and what they call "toxic agent" data exfiltration chains. MCP has a security model. Production deployments have revealed where it needs strengthening.
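Pento's first two lessons translate directly into how a tool is declared. Here's a hedged sketch of a narrow, precisely described, read-only tool definition; the `inputSchema` and `annotations`/`readOnlyHint` fields follow the MCP tool schema, though treat the exact shape as an assumption rather than a canonical example:

```python
# One narrow job, a precise description, an explicit read-only hint.
# Field names follow the MCP tool schema; the tool itself is hypothetical.
tool = {
    "name": "get_invoice_status",
    "description": (
        "Return the payment status of a single invoice by its ID. "
        "Read-only; never modifies data."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string", "description": "Invoice identifier"},
        },
        "required": ["invoice_id"],
    },
    "annotations": {"readOnlyHint": True},
}
```

Contrast this with a broad `manage_invoices` tool that takes an `action` string: the model has to guess more, the description explains less, and the permission surface is wider than any single call needs.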

Our read: MCP is mature enough for production, but teams shipping it need to treat security as an active engineering concern, not a checkbox.

Governance Gets Interesting

The story takes an unexpected turn here. Anthropic created MCP, but they didn't keep it.

In December 2025, Anthropic donated MCP to the Agentic AI Foundation, a new directed fund under the Linux Foundation. The co-founders are Anthropic, OpenAI, and Block. Platinum members include AWS, Google, Microsoft, Bloomberg, and Cloudflare.

Read that list again. Anthropic and OpenAI co-governing a shared protocol. Google and Microsoft both signing on. This level of competitor cooperation on AI infrastructure is unprecedented.

The Linux Foundation announcement frames the mission as ensuring agentic AI evolves "transparently, collaboratively, and in the public interest." The practical effect: no single vendor can claim ownership or steer the spec toward their advantage. MCP becomes neutral infrastructure. The Foundation also houses two other projects: AGENTS.md from OpenAI (conventions for AI agents interacting with repositories) and goose from Block (an open agent framework).

One protocol MCP doesn't compete with: Google's Agent2Agent. MCP connects agents to tools and data. A2A connects agents to other agents. As we covered in our A2A explainer, think of MCP as the agent's hands (how it manipulates the world) and A2A as the agent's mouth and ears (how it coordinates with peers). Enterprises deploying serious agent infrastructure will likely need both.

MCP succeeded because it solved a problem everyone had, launched with working implementations, and transitioned to neutral governance before resentment could build. The sequence matters. If Anthropic had tried to establish a foundation before proving the protocol worked, it would have been an empty gesture. If they had kept control too long, competitors would have built alternatives. The timing was right. The protocol was right. And critically, the governance transition was right.

For developers, none of this political maneuvering matters except for one thing: MCP is now safe to bet on. It has the adoption, the governance structure, and the backing of every major player in AI infrastructure. Build your integrations against MCP and they'll work everywhere that matters.

