Anthropic today released a new open source protocol to let all AI systems, not just its own, connect with data sources via a standard interface.
Model Context Protocol (MCP), the company said in its announcement, uses a client-server architecture to let developers build secure, two-way connections between AI-powered tools and the data sources they need to do their jobs.
“As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale,” Anthropic said. “MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.”
Monday’s announcement was threefold: The company introduced the MCP spec and software development kits (SDKs), launched local MCP server support in its Claude Desktop apps, and provided an open source repository of MCP servers, including prebuilt servers for Slack, GitHub, SQL databases, local files, search engines, and other data sources.
Anthropic said that Block and Apollo have already integrated MCP, and that development tool vendors including Zed, Replit, Codeium, and Sourcegraph are adding support.
How MCP works
“The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers,” Anthropic said in its post.
There are three components to an MCP connection (a minimal client sketch in code follows the list):
- Hosts – LLM applications such as Claude Desktop that initiate connections;
- Clients – Systems that maintain 1:1 connections with servers, inside the host application;
- Servers – Systems that provide context, tools, and prompts to clients.
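To make those roles concrete, here is a rough sketch of the client side using Anthropic's Python SDK: the host application spawns a local server script and holds a 1:1 session with it over stdio. The server script name is a placeholder, and exact module paths and return types may differ between SDK versions.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch a local MCP server as a subprocess and speak to it over stdio.
# "example_server.py" is a placeholder for any MCP server script.
server_params = StdioServerParameters(command="python", args=["example_server.py"])

async def main() -> None:
    # The client holds a 1:1 connection with this one server,
    # inside the host application.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()           # protocol handshake
            tools = await session.list_tools()   # ask the server what it exposes
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```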
Core components include a protocol layer, which handles message framing, request/response linking, and high-level communication patterns, and a transport layer, which carries the actual communication between client and server.
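Those layers are easy to picture on the wire: MCP messages are JSON-RPC 2.0, so the protocol layer matches each response to its request by id while the transport layer (stdio, for local servers) moves the bytes. A rough illustration, with an abridged, hypothetical payload:

```python
import json

# A single request/response exchange as it crosses the transport.
# The "query_database" tool shown in the result is purely illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,                  # the id links the response back to this request
    "method": "tools/list",   # ask the server to enumerate its tools
    "params": {},
}

response = {
    "jsonrpc": "2.0",
    "id": 1,                  # same id: request/response linking
    "result": {
        "tools": [
            {"name": "query_database", "description": "Run a read-only SQL query"}
        ]
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```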
At the moment there are two SDKs available, one for TypeScript and one for Python. Anthropic also provides plenty of documentation on getting started and a GitHub repository of reference implementations and community-contributed servers.
Currently, MCP only talks to servers running on a local computer, but Alex Albert, head of Claude relations at Anthropic, said in a post on X that work is in progress to allow for remote servers with enterprise-grade authentication.
“An MCP server shares more than just data as well. In addition to resources (files, docs, data), they can expose tools (API integrations, actions) and prompts (templated interactions),” he added. “Security is built into the protocol — servers control their own resources, there’s no need to share API keys with LLM providers, and there are clear system boundaries.”
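In practice, a single server script can declare all three. Below is a minimal sketch of such a server built with the Python SDK's FastMCP helper; the tool, resource, and prompt are illustrative examples, and the exact module path depends on the SDK version.

```python
from mcp.server.fastmcp import FastMCP

# Give the server a name; clients see it during the initialization handshake.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """An example action the model can invoke: add two numbers."""
    return a + b

@mcp.resource("config://app")
def app_config() -> str:
    """An example resource: read-only data the client can pull in as context."""
    return "debug=false\nregion=us-east-1"

@mcp.prompt()
def review_code(code: str) -> str:
    """An example templated interaction."""
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio, so a local client can connect to it
```

A client like the one sketched earlier could then list and call add over the same stdio session.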
Anthropic said that developers can start building and testing MCP connectors today, and existing Claude for Work customers can test MCP servers connecting to internal systems and data sets. And, the company promised, “We’ll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.”