The AI Agent Protocol Wars: How MCP and A2A Are Reshaping Startup Opportunities
The AI agent ecosystem is rapidly evolving around two critical protocols: MCP and A2A.
Recent analysis has identified five key conditions necessary for LLMs to evolve into true AI agents, with Connection and Customization emerging as the primary areas where startups can still find meaningful opportunity.
This analysis examines how MCP and A2A, both "just protocols" on the surface, are emerging as de facto standards, and how they're accelerating the path toward fully realized AI agents.
What is MCP (Model Context Protocol)?
Whenever the topic of AI agents comes up, you'll often hear the acronym MCP, short for Model Context Protocol. At its core, it's nothing too flashy.
How MCP Enables AI Agents to Use External Tools
MCP is a standardized protocol released by Anthropic in November 2024. Simply put, it's a kind of instruction manual that allows an LLM to interact with external resources and tools. Much like HTTP or web APIs, MCP defines how an LLM can call, use, and exchange data with tools.
If traditional LLMs were like an old-fashioned scholar dispensing wisdom from a chair, the advent of MCP finally enabled them to pick up and use tools directly. Still not clicking? Then let's take a closer look at the structure of an AI agent and see exactly what role MCP plays.
MCP itself is just a small connector between an LLM and external tools, one component in the larger architecture of an AI agent. It doesn't do much on its own; it's closer to a manual than a machine. If you're thinking, "So what's the big deal?", you're actually spot on. Until March of this year, MCP drew almost no attention at all.
But that's often how technology works. MCP is a bit like a USB-C cable: nothing glamorous on its own, but indispensable once everything relies on it. Just as you need an adapter to connect an old port to a new one, AI agents, tools, and models also need a small, standardized interface to connect seamlessly.
Why Cursor's MCP Integration Changed Everything
Earlier this year, Cursor introduced MCP support on its platform, powered by Claude LLM. Developers now only need to define tools in the MCP format, and Claude can automatically recognize and call those tools as needed. Cursor, in turn, provides a visual UI for the execution process and even allows user interaction.
Think of it like this: MCP is the textbook, Claude is the student who can read it, and Cursor is the classroom where the student can act and speak.
In other words, MCP has, for the first time, established a 'functioning ecosystem'. What was once merely a technical specification has, with Cursor's support, begun operating as a practical standard that connects LLMs with tools in real-world use. This marks the turning point where MCP evolved from a simple protocol into the "language of the AI ecosystem."
More recently, beyond Cursor, a growing number of developer tools and platforms have begun to adopt MCP as a core integration standard. MCP is evolving beyond a simple tool-calling mechanism, establishing itself as a standard for defining and sharing tools, and enabling multiple agents to use them collaboratively.
Emerging trends include:
MCP integration in AI marketplaces
MCP + OSS toolkit integration: creating a personalized MCP-driven agent environment
MCP extension proposals (MCP-Mod, MCP-A2A, etc.)
In other words, MCP has evolved from a simple manual into a standardized execution language covering the entire flow: tool definition → execution → sharing → collaboration.
At this stage, we are in the early diffusion phase, where the prevailing sentiment is, "We'll support MCP; come build with us." Developers and companies alike are entering the market under this collaborative momentum.
MCP vs SDK: Why AI Needs a New Approach
The natural first question is: "So, how did things work before MCP?" To answer that, let's begin with the concept of an SDK.
An SDK (Software Development Kit) is essentially a toolkit that enables developers to integrate specific functions or services into their applications. For example, if you wanted to add Google Translate to your app, you could use Google's SDK to do so easily, without having to implement complex language-processing features from scratch.
But hereâs the catch: SDKs have always been designed with human developers in mind.
Now, in an era where it's not just humans but AI models (LLMs) that need to access external tools directly, SDKs quickly show their limitations.
Imagine Claude or GPT needing to search the web, read a Notion document, or execute code. Under the SDK model, every framework (LangChain, AutoGen, and others) requires its own custom-built SDK. Whenever a tool gets updated, the SDK has to be revised as well. And crucially, a human still has to manually connect, maintain, and test the integration each time.
In that sense, for developers who are already fast and efficient at wiring up SDKs, MCP might not seem necessary at first glance.
Ultimately, MCP can be seen as a standardized "tool description format" designed to solve the very problems we just discussed. Instead of being written for human developers, it's structured so that AI can read it directly: MCP defines tools in a JSON-RPC-based request/response schema that resembles a function call. This makes it far easier for an LLM to parse, reason about, and incorporate into its logical planning.
Thanks to MCP, an LLM can now read a tool description like the example below, understand what the tool does, determine the required inputs, and then call it autonomously.
```json
{
  "name": "get_weather",
  "description": "Get weather information for a city",
  "parameters": {
    "city": { "type": "string" }
  }
}
```
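On the wire, such a tool invocation is framed as a JSON-RPC message. Here is a rough Python sketch of how an LLM-side client might build one; the `tools/call` method name follows MCP's convention, but the framing is simplified rather than the full specification:

```python
import json

def make_tool_call(req_id: int, tool_name: str, arguments: dict) -> str:
    """Frame an MCP-style tool invocation as a JSON-RPC 2.0 request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",  # MCP's tool-invocation method
        "params": {"name": tool_name, "arguments": arguments},
    })

# The model chooses the tool and fills in arguments from the declared schema.
request = make_tool_call(1, "get_weather", {"city": "Seoul"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # get_weather
```

Because the request is just structured JSON, any MCP-aware model or runtime can produce and consume it without a framework-specific SDK in between.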
Most importantly, this format is not tied to any single framework. For tool providers, this means they no longer need to maintain separate SDKs for LangChain, AutoGen, and others. With MCP, offering a single specification is enough; AI can interpret and use the tools on its own.
| | SDK | MCP |
| --- | --- | --- |
| Target | Human developers | LLM (AI) |
| Integration | Separate SDKs per framework | Unified via MCP |
| Maintenance | SDK must be updated per tool change | Update the MCP spec only |
| Flexibility | Platform-dependent | Platform-independent |
So far, MCP has served as a standard that enables LLMs to use external tools. But now, LLMs need to work not only with tools but also with other agents. (In fact, you could argue this is already happening!)
What is A2A (Agent-to-Agent Protocol)?
From Solo AI Agents to Collaborative Teams
If MCP provided a standardized format that allowed LLMs to use external tools, then A2A (Agent-to-Agent) can be seen as a conversational protocol designed to enable AI agents to collaborate and coordinate with one another.
Why has this become necessary?
Because we are now entering an era where LLMs must go beyond simply leveraging tools; they need to interact with other LLMs and agents, working together to accomplish tasks.
Why Multi-Agent Systems Are the Future
Until now, the prevailing model has been a single LLM leveraging multiple tools to solve problems. But as tasks grow increasingly complex and interdependent, it is becoming clear that one agent alone cannot shoulder every role effectively.
Take, for example, the case of building a recruiting assistant agent. If a single agent were tasked with collecting résumés, conducting background checks, scheduling interviews, and generating summary reports all on its own, clear limitations would emerge. This is precisely where A2A comes in: a protocol that enables agents with distinct roles to collaborate.
In short, A2A is a set of message-exchange rules designed to let multiple LLMs divide responsibilities and work together. If MCP is the language for AI-to-tool interaction, then A2A can be thought of as the grammar for Agent-to-Agent dialogue.
How A2A Protocol Enables Agent Coordination
It's simple. Each agent defines its capabilities as a "role," exchanges messages with other agents, and through collaboration they work together to achieve a larger goal.
Imagine, for instance, making a request on the Agentspace platform such as, "Recommend a frontend engineer." The task could then be relayed across agents: first a résumé collection agent, then a social media analysis agent, followed by an interview summary agent, and finally a reporting agent.
Each agent autonomously collaborates through state sharing and goal alignment. In effect, it resembles a team project: dividing roles, sharing intent and objectives, and working together toward a common outcome.
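To make the team-project analogy concrete, here is a toy Python sketch of that recruiting pipeline. This is not the A2A wire format; the agent names, roles, and `Message` fields are illustrative stand-ins for the structured messages the protocol defines:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    task: str
    artifacts: list = field(default_factory=list)  # accumulated results

class Agent:
    """A minimal agent: a name plus a declared role (capability)."""
    def __init__(self, name: str, role: str):
        self.name, self.role = name, role

    def handle(self, msg: Message) -> Message:
        # Each agent contributes its part, then forwards the message.
        msg.artifacts.append(f"{self.role}: done by {self.name}")
        msg.sender = self.name
        return msg

# Roles mirror the recruiting example above.
pipeline = [
    Agent("collector", "resume collection"),
    Agent("analyst", "social media analysis"),
    Agent("summarizer", "interview summary"),
    Agent("reporter", "final report"),
]

msg = Message(sender="user", task="Recommend a frontend engineer")
for agent in pipeline:
    msg = agent.handle(msg)

print(msg.sender)          # reporter
print(len(msg.artifacts))  # 4
```

In a real A2A deployment, the hand-off between agents would travel over the protocol's structured messages rather than an in-process loop, but the division of roles and the shared, growing task state are the same idea.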
And importantly, A2A is more than just a concept. Open-sourced by Google in early 2025, the A2A protocol has already attracted participation from more than 50 tech partners, with major frameworks and platforms, including Agentspace, LangChain, DSPy, and Replit, experimenting with its adoption.
Whereas MCP standardized tool integration, A2A is now structuring the conversations, coordination, and collaboration among agents themselves.
A2A vs MCP: Tools vs Teammates Comparison
The difference between MCP and A2A can be summed up in one phrase: from tools to teammates. MCP gave AI the ability to use tools, while A2A gives AI the ability to collaborate and coordinate. In other words, they target entirely different dimensions.
| | MCP | A2A |
| --- | --- | --- |
| Purpose | External tool calls | Agent-to-agent collaboration |
| Method | JSON-RPC tool specs | Structured message exchange |
| Unit | LLM ↔ Tool | Agent ↔ Agent |
| Examples | Cursor, Claude, Slack GPT plugins | Agentspace, CrewAI, AutoGen |
Weâve moved from creating agents that work well alone to building teams of agents that work well together. MCP and A2A are emblematic of this shift, and they are poised to become the foundational grammar of the AI ecosystem moving forward
MCP and A2A: AI Agent Startup Opportunities in 2025
Future agent services will be built around the MCP and A2A protocols: connecting, conversing, and working with one another. We are moving beyond simple calls into an era of meaningful interactions.
As mentioned briefly in the introduction, of the two pillars, Connection and Customization, the former has seen significant progress with the advent of MCP and A2A. The remaining frontier, then, is Customization: how AI agents are tailored and applied to the unique needs of each organization and service.
How MCP and A2A Lower Startup Entry Barriers
The emergence of MCP and A2A represents a technological inflection point. It signals a shift away from reliance on large platforms and toward an environment where even small teams can participate in the agent ecosystem with a single, well-crafted function.
In the past, building a single product required developing the entire stack: UX, back office, deployment, and analytics. Today, by contrast, market entry can be achieved with just one well-designed agent focused on a core function. In other words, even small teams can now enter the market with "standalone" execution-ready agent units.
Building AI Agents Without Platform Dependencies
Until quite recently, agents could operate only within hubs like OpenAI or LangChain. With the arrival of MCP, however, models now have a common standard for calling and composing tools directly.
Thanks to that, agents built independently can be distributed in the market directly without relying on a central platform. Looking ahead, if an Agent Marketplace emerges in full, functional agents created by startups could be deployed and consumed as standalone services.
The Competitive Landscape for AI Agent Startups
The fact that anyone can now build an agent also means competition is intensifying. Yet it doesn't take excessive optimism to see this as an opportunity rather than a disadvantage: startups now get to play in a market with clear rules, a level field where technology defines true competence.
Why Early AI Agent Platform Adoption Matters
In the beginning, the A2A era may appear highly decentralized. Yet in the long run, it is likely that the entity controlling the connections, distribution, and evaluation standards for agents will evolve into the new dominant platform. The trajectory is reminiscent of how the App Store developed in the past.
This is why the current landscape should be seen as an early-stage ecosystem, one where a wide variety of creative agents can emerge. For startups, this moment represents a true window of opportunity to establish themselves quickly.
In this context, it may be far more strategic not to focus on building a fully finished SaaS, but rather to identify the problems that can be "agentized" first.
For now, MCP and A2A are distributed as open source rather than through purchase-based marketplaces. And in practice, challenges remain around installation, authentication, monitoring, logging, and version control. It is precisely at this junction, however, that new opportunities for startups are likely to emerge.
Four AI Agent Business Models to Watch
MCP-Notion connector SaaS (with no-code setup, monitoring, admin dashboards)
Agent orchestration platforms for managing multi-agent workflows
Connector-layer SaaS integrated with services such as Slack, Notion, and Vercel
Token-based monetization models once A2A evolves into paid call/settlement systems
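The operational gaps noted above (authentication, monitoring, logging) are exactly what a connector-layer product would wrap around raw tool calls. Here is a minimal Python sketch of that idea; the function names and the stand-in `get_weather` tool are illustrative, not a real MCP server integration:

```python
import time

def call_with_ops(tool_fn, arguments: dict, api_key: str, log: list):
    """Wrap a tool invocation with an auth check and structured logging."""
    if not api_key:
        raise PermissionError("missing API key")
    start = time.time()
    result = tool_fn(**arguments)
    log.append({
        "tool": tool_fn.__name__,
        "arguments": arguments,
        "elapsed_ms": round((time.time() - start) * 1000),
    })
    return result

# Stand-in tool; a real connector would proxy an MCP server instead.
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny"}

log = []
out = call_with_ops(get_weather, {"city": "Seoul"},
                    api_key="demo-key", log=log)
print(out["forecast"])  # sunny
print(log[0]["tool"])   # get_weather
```

The protocol standardizes the call itself; everything in the wrapper (who may call, what gets recorded, how failures surface) is left open, which is where the business models listed above live.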
Even as the technology becomes standardized, the real battle will still lie in configuration, operations, and customization. Now that the floodgates of Connection have been opened, the next stage will be decided by how quickly, and how seamlessly, this massive shift can be applied in practice.
MCP and A2A are less about breakthrough technologies in themselves and more akin to a common language, one that makes the AI agent ecosystem accessible for anyone to assemble and extend. And right now, we are at the very moment when that language is beginning to circulate and its grammar is still being written.
As the market is still in its early stages, there remain more opportunities for startups than one might expect. The teams that recognize this momentum, experiment faster than anyone else, and innovate on execution will be the ones to shape the next phase of the market.
For founders building in the MCP and A2A ecosystem, the window of opportunity remains wide open. The infrastructure is nascent, the standards are still being defined, and the market is ready for innovative solutions that bridge the gap between protocol potential and practical implementation.
To all the founders experimenting with new forms of AI connection and collaboration: we see the importance of your work. The future of productive AI depends on innovators who are willing to tackle the hard problems of making agents truly collaborative and enterprise-ready.
The future belongs to those who can turn these foundational protocols into platforms that make AI agents genuinely useful in real-world workflows.
From the Kakao Ventures team.