
Introduction
Open, cooperative AI agents are moving from white papers to production systems. On 7 May 2025, Microsoft announced first-class support for Google’s Agent2Agent (A2A) protocol inside Azure AI Foundry and Copilot Studio, and pledged engineering muscle to A2A’s open-source working group. This post walks through what A2A is, why Microsoft’s backing matters, and how A2A dovetails with the already-adopted Model Context Protocol (MCP).
What is A2A?
Agent2Agent is an open, vendor-neutral specification that lets autonomous AI agents:
- discover each other
- exchange goals and task state
- invoke actions and return results
- pass every call through enterprise-grade auth (mutual TLS, OAuth, Entra ID)
Google released the spec in April 2025 with more than 50 contributing partners ranging from Salesforce to LangChain. Think of A2A as “HTTP for software robots.”
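To make the bullet points above concrete, here is a rough Python sketch of the two core moves: discovering a remote agent through its Agent Card and sending it a task. The endpoint URL is made up, and the method and field names paraphrase the public spec at google.github.io/A2A, so treat this as an illustration rather than a drop-in client.
```python
# Minimal sketch: discover a remote agent via its Agent Card, then send it a task.
# Method and field names are illustrative; consult google.github.io/A2A for the
# authoritative schema. The agent URL is hypothetical.
import uuid
import requests

AGENT_BASE_URL = "https://agents.example.com/travel-planner"  # hypothetical endpoint

# 1. Discovery: A2A agents advertise their skills in a public Agent Card.
card = requests.get(f"{AGENT_BASE_URL}/.well-known/agent.json", timeout=10).json()
print("Remote agent:", card.get("name"), "-", card.get("description"))

# 2. Task exchange: wrap the goal in a JSON-RPC request and post it to the agent.
task_request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task id, reused for any follow-up messages
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarise Q2 travel spend by team."}],
        },
    },
}
response = requests.post(AGENT_BASE_URL, json=task_request, timeout=30)
print(response.json())  # task status plus any artifacts the agent produced
```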
Why Microsoft’s Adoption Matters
- Instant scale – Azure AI Foundry serves 70,000+ enterprise dev teams, and Copilot Studio is live in 230,000 organisations (90% of the Fortune 500). Shipping A2A here makes the protocol a default choice overnight.
- Cross-cloud workflows – A Foundry agent scheduling Outlook meetings can now delegate the email copywriting to a Google-hosted agent, with both calls running through Entra safeguards.
- Industry signal – Two hyperscalers backing the same standard accelerates vendor uptake. Expect “A2A-ready” badges on everything from CRM plugins to RPA bots by year-end.
A2A + MCP: Two Layers, One Stack
| Layer | Purpose | Example |
| --- | --- | --- |
| MCP | Gives individual agents structured access to tools & data sources. | Copilot grabs SharePoint docs with one MCP call. |
| A2A | Lets multiple agents coordinate tasks across clouds. | That same Copilot asks a Google Drive agent to summarise a PDF. |
Google itself describes A2A as a complement to Anthropic’s MCP, not a competitor. Microsoft’s blog echoes the sentiment, calling both protocols “important steps toward composable, intelligent systems.”
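The difference is easiest to see on the wire. Both protocols ride on JSON-RPC, but an MCP message targets a tool on a single server, while an A2A message hands an entire task to another agent. The snippet below is a simplified sketch with made-up tool names and trimmed fields, not a verbatim copy of either schema.
```python
# Side-by-side sketch of the two layers. Payload shapes are simplified; see the
# MCP and A2A specs for the full schemas. Tool names and arguments are made up.

# MCP: one agent calls one tool on one server (vertical access to data/tools).
mcp_tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "sharepoint_fetch_document",          # hypothetical tool name
        "arguments": {"path": "/sites/finance/q2-review.docx"},
    },
}

# A2A: one agent hands a whole task to another agent (horizontal coordination).
a2a_task = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tasks/send",
    "params": {
        "id": "task-42",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarise the attached PDF in 5 bullets."}],
        },
    },
}
```
In practice a single agent often uses both: MCP to reach its own tools and data, A2A to recruit peers for work it cannot do itself.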
Getting Hands-On (Preview)
Prerequisites: Azure subscription, Foundry workspace, Python 3.11
Step 1 — Enable the preview
```bash
# Azure CLI
az extension add --name foundry-agent
az foundry agent update --name myFoundryAgent --enable-a2a true
```
Step 2 — Clone Microsoft’s Semantic Kernel sample
```bash
git clone https://github.com/microsoft/semantic-kernel-samples.git
cd semantic-kernel-samples/agents/a2a_travel
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python main.py   # two local agents plan a trip together
```
Step 3 — Wire an external agent
Create an agent-card.json describing your remote agent’s capabilities, expose an HTTPS endpoint that speaks A2A, and register the card in Foundry:
```bash
az foundry agent card upload --file agent-card.json
```
Foundry handles mTLS, policy checks, and auditing automatically.
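For reference, here is a hedged sketch of what such a card might contain, written as a small Python script that emits agent-card.json. The field names approximate the AgentCard schema in the public spec, and the URLs and skill ids are placeholders; verify against google.github.io/A2A before uploading.
```python
# Sketch of generating agent-card.json. Field names approximate the AgentCard
# schema in the public A2A spec; double-check them against google.github.io/A2A
# before uploading. URLs and skill ids are placeholders.
import json

agent_card = {
    "name": "Contoso Expense Summariser",
    "description": "Summarises expense reports and flags policy violations.",
    "url": "https://agents.contoso.com/expenses",   # your HTTPS A2A endpoint
    "version": "0.1.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "summarise-expenses",
            "name": "Summarise expenses",
            "description": "Produces a short summary of an uploaded expense report.",
        }
    ],
}

with open("agent-card.json", "w") as f:
    json.dump(agent_card, f, indent=2)
```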
Business Benefits
- Less glue code – Standard messages mean you can swap LangChain ↔ Semantic Kernel without rewriting adapters.
- Governance by default – Every A2A hop travels through Entra ID, Content Safety filters, and full audit trails.
- Faster ROI – 65% of companies are already piloting AI agents, up from 37% just last quarter. Analysts peg the agent market at $7.8B today, growing to $52.6B by 2030.
- Future-proofing – Open protocols sidestep vendor lock-in; you’re free to mix Anthropic, OpenAI, and Gemini models under the same roof.
Technical Considerations
- Security – Follow the MAESTRO threat-model guidelines for A2A deployments.
- Latency budget – Chaining many agents increases round-trips; fan out independent delegations in parallel where possible (see the sketch after this list).
- Observability – Pipe A2A traces into OpenTelemetry to visualise cross-agent spans.
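The last two points combine into one pattern: send independent delegations concurrently and wrap each hop in a span. Below is a hedged sketch using httpx and the OpenTelemetry API; the agent URLs are placeholders, the payload shape mirrors the earlier sketches, and it assumes an OpenTelemetry SDK and exporter are configured elsewhere.
```python
# Sketch: fan out A2A delegations in parallel and trace them with OpenTelemetry.
# Agent URLs are placeholders; the payload shape mirrors the earlier sketches.
# Assumes an OpenTelemetry SDK/exporter is configured elsewhere (otherwise spans are no-ops).
import asyncio
import httpx
from opentelemetry import trace

tracer = trace.get_tracer("a2a.orchestrator")

async def send_task(client: httpx.AsyncClient, agent_url: str, goal: str) -> dict:
    # One A2A hop == one span, so cross-agent latency shows up in your traces.
    with tracer.start_as_current_span(f"a2a.tasks/send {agent_url}"):
        payload = {
            "jsonrpc": "2.0",
            "id": goal,
            "method": "tasks/send",
            "params": {
                "id": goal,
                "message": {"role": "user",
                            "parts": [{"type": "text", "text": goal}]},
            },
        }
        response = await client.post(agent_url, json=payload, timeout=30)
        return response.json()

async def main() -> None:
    # Independent sub-tasks go out together instead of one round-trip at a time.
    async with httpx.AsyncClient() as client:
        flights, hotels = await asyncio.gather(
            send_task(client, "https://agents.example.com/flights", "Find flights to Oslo"),
            send_task(client, "https://agents.example.com/hotels", "Find hotels in Oslo"),
        )
    print(flights, hotels)

asyncio.run(main())
```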
Resources
- Agent2Agent Spec & Roadmap – google.github.io/A2A
- Microsoft Launch Blog – “Empowering multi-agent apps with A2A”
- Semantic Kernel A2A Sample – devblogs.microsoft.com
- Model Context Protocol Docs – Anthropic MCP GitHub
Conclusion
A2A gives agents a shared language; MCP gives them reliable memory and tool-use. With Microsoft now all-in on both layers, 2025 is shaping up to be the year we move from siloed “mega-bots” to orchestrated teams of small, specialised agents. Start experimenting today—because interoperable agents won’t just be a nice-to-have, they’ll be table stakes for the next generation of AI-powered software.