Trade notes
AI lab
MCP & Tools · 7 min read

MCP and tool use: the protocol the agent era settled on

The Model Context Protocol went from niche Anthropic spec to industry default in eighteen months. Why that happened, what it actually solves, and where the rough edges are in 2026.

The most important thing that happened to AI tooling in 2025 wasn't a new model. It was the quiet adoption of the Model Context Protocol (MCP) as the de facto standard for connecting agents to tools and data. Anthropic introduced it in late 2024. By mid-2026 the community has built thousands of MCP servers, every major SDK speaks it, and competing frameworks have either added MCP compatibility or quietly stopped getting attention.

For enterprise architects, this is the kind of standardization that changes what's worth building.

What MCP actually is

MCP is a small protocol that defines how an AI model (or an agent on top of one) discovers and calls tools, resources, and prompts exposed by external servers. The wire format is JSON-RPC. The semantics are deliberately boring: a server lists what it offers, the client calls what it wants, results come back in a typed shape.

The clever move was scoping: MCP doesn't try to be an agent framework, a data lake, or an orchestration layer. It just standardizes the seam between models and the world. Because the seam is small, almost everyone could agree on it.
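
The seam is small enough to sketch in a few lines. The method names (`tools/list`, `tools/call`) and the JSON-RPC 2.0 envelope come from the MCP spec; the ticketing tool itself is a hypothetical example:

```python
import json

# Shape sketch of the MCP seam. A client invoking one tool is a
# single typed JSON-RPC request; nothing more exotic than this.
def call_tool_request(req_id: int, name: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# A server's tools/list result advertises each tool together with a
# JSON Schema for its input, so clients can validate before calling.
# "get_ticket" is an invented example tool, not a real server.
listing = {
    "tools": [{
        "name": "get_ticket",
        "description": "Fetch a ticket by id",
        "inputSchema": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    }]
}
```

That typed shape is the whole trick: a server lists, a client calls, and both sides can validate against the schema.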

Why it won

Three reasons:

  • Composability. A team that ships an MCP server for, say, your ticketing system makes that system available to every agent (Claude, OpenAI, internal frameworks, third-party tools) without writing a custom integration for each one. The marginal cost of a new agent surface drops to near zero.
  • Auditability. Tool use through a standard protocol is loggable in a standard way. Compare a custom curl-from-a-prompt to a typed MCP call: the latter is something a security team can actually review.
  • Vendor independence. Enterprises burned by being locked into a specific assistant's plugin spec in 2023 didn't want to repeat it. MCP being open and multi-vendor was a political feature as much as a technical one.
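
The auditability point is concrete: a typed call serializes into one reviewable record. A minimal sketch, where the record's field names are our own convention rather than anything in the protocol:

```python
import json
import time

# Sketch: because every MCP tool call has the same typed shape,
# audit logging reduces to serializing one record per call. Field
# names here are illustrative, not part of the MCP spec.
def audit_record(server: str, tool: str, arguments: dict, actor: str) -> str:
    return json.dumps({
        "ts": time.time(),        # when the call happened
        "actor": actor,           # which agent or user initiated it
        "server": server,         # which MCP server was hit
        "tool": tool,             # a typed tool name, not a raw shell string
        "arguments": arguments,   # structured input a reviewer can read
    })
```

Compare that to grepping shell history for what a prompt-driven `curl` actually did.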

What changed in 2026

Two developments worth tracking:

Code execution becomes a first-class MCP pattern. Anthropic's work on code execution with MCP formalized a pattern that was already emerging: the agent writes code in a sandbox, and that code calls the MCP servers, rather than the model calling tools directly turn-by-turn. The token savings are real (one code block can orchestrate ten tool calls in one round-trip), and the auditability story is better, not worse. The code is the artifact you log.
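
The pattern looks roughly like this. `call_tool` is a hypothetical helper the sandbox would expose as a proxy to MCP servers, and the ticketing tools are invented for illustration:

```python
# Sketch of the code-execution pattern: instead of the model issuing
# ten tool calls turn-by-turn, it emits one script like this into a
# sandbox. The script itself is the auditable artifact.
def triage(call_tool, ticket_ids):
    """One round-trip: fetch every ticket, escalate the urgent ones."""
    escalated = []
    for tid in ticket_ids:
        ticket = call_tool("tickets", "get_ticket", {"ticket_id": tid})
        if ticket.get("priority") == "urgent":
            call_tool("tickets", "escalate", {"ticket_id": tid})
            escalated.append(tid)
    return escalated
```

Ten tickets, up to twenty tool calls, one model turn: that is where the token savings come from.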

Folder structure becomes context. A subtle but important shift: agents that persist state between sessions are increasingly using the file system layout itself as a form of memory. An email-handling agent might have a Conversations/ folder it can search; a research agent might have Notes/ and Sources/. This is "context engineering" in the unfashionable sense: managing what the model gets to see. It scales further than stuffing everything into the prompt.
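
A minimal sketch of the idea, assuming an agent whose Conversations/ folder (the article's example; the layout is illustrative) holds markdown notes:

```python
from pathlib import Path

# Filesystem-as-memory sketch: the agent searches its own folder
# instead of carrying the full history in the prompt. Only matching
# files ever reach the model's context window.
def recall(root: Path, query: str, limit: int = 3) -> list[str]:
    """Return the most recently modified notes mentioning `query`."""
    notes = sorted((root / "Conversations").glob("*.md"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    hits = [p for p in notes if query.lower() in p.read_text().lower()]
    return [p.name for p in hits[:limit]]
```

The design choice is that retrieval cost is paid at call time, per query, instead of every turn in the prompt.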

Combined: the 2026 agent isn't a clever prompt with tools wired in. It's a small program that knows where to look on disk and how to call the few MCP servers it needs.

The Claude Agent SDK rebrand

Anthropic renamed the Claude Code SDK to the Claude Agent SDK in early 2026 to reflect the broader scope. It's the framework the Cowork-style long-horizon agents are built on. MCP is the tool-use mechanism inside it. The same SDK now powers research agents, coding agents, and the Cowork sessions running inside Microsoft 365: same primitives, different prompts, different toolsets.

The takeaway for an architect choosing infrastructure: betting on MCP-first agent stacks gives you the most optionality. You can swap orchestration frameworks, swap models, even swap providers, and the tool layer stays put.

Where the rough edges still are

MCP is good but not finished. The honest list of pain points in 2026:

  • Auth is per-server and inconsistent. Some servers want OAuth, some want API keys, some want a token in a header. Standardization here is happening but slowly.
  • Discovery is manual. No public registry of trusted MCP servers exists with the maturity of a package manager. Enterprises are running internal registries to control what their agents can reach. Reasonable, but adds operational overhead.
  • Cost shaping is the user's problem. A chatty MCP server can blow up token usage with verbose responses. The protocol doesn't help you cap that. Your agent code does.
  • Long-running tools. Anything that takes more than a few seconds is awkward. Streaming and async patterns exist but aren't uniform.
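
The cost-shaping point in practice: the agent wrapper, not the protocol, clips verbose results before they reach the model. A sketch with an arbitrary character budget of our choosing:

```python
# Sketch: cap a chatty MCP server's output in agent code, since the
# protocol itself provides no budget mechanism. The limit and the
# truncation marker are both our own convention.
MAX_CHARS = 4_000  # rough per-result budget before it hits the model

def clip(result_text: str, limit: int = MAX_CHARS) -> str:
    """Truncate a tool result, noting how much was dropped."""
    if len(result_text) <= limit:
        return result_text
    dropped = len(result_text) - limit
    return result_text[:limit] + f"\n[truncated {dropped} chars]"
```

A character budget is crude; a production wrapper might summarize instead of truncating, but the principle is the same: shaping happens on your side of the seam.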

None of these are deal-breakers. They are the normal post-standardization cleanup that follows any successful protocol.

In your M365 environment

Two practical things to know:

  • Microsoft 365 increasingly speaks MCP under the hood. Custom Copilot connectors are being aligned with MCP semantics; Cowork can call MCP servers as tools. If you've spent the last two years building Power Platform connectors, the MCP path is additive: same data, additional surface for agents that don't live inside Microsoft's stack.
  • Your security review needs an MCP chapter. When a line-of-business team asks to install a third-party MCP server in your tenant, you need a process to evaluate it the way you'd evaluate any other integration: what data does it touch, what scopes does it require, how is it logged, who maintains it. Most orgs don't have this yet. Build it before you need it.

MCP is one of those rare protocols where the correct response from an enterprise architect is to lean in early. The cost of being late is high, the cost of being early is low, and the network effects are obvious.


Sources: Anthropic: Code execution with MCP · Anthropic: Building agents with the Claude Agent SDK · Model Context Protocol: Wikipedia overview · Building with MCP: Anthropic guidance