
AI Innovation Through Interoperability – MCP, ACP, and A2A as Enablers in B2B LLM Context

Tobias Jonas | 5 min read

In today’s digital world, CEOs and executives face the challenge of understanding Artificial Intelligence not just as isolated buzzwords, but as an integral part of modern business processes. Large Language Models (LLMs) like GPT and Claude are already revolutionizing how companies process data and make decisions. But to achieve real added value, these systems need to process more than just text-based input. They need comprehensive context, external tools, and exchange with other AI agents. This is exactly where three new communication protocols come in: MCP from Anthropic, ACP from IBM, and A2A from Google.

Why Standardized Protocols for AI Are Important

Simple interaction with a chatbot or voice assistant is no longer sufficient for complex B2B problems. Accessing real-time data, connecting APIs, or having multiple AI agents communicate with each other requires standardized interfaces and protocols. These protocols provide the blueprint for how systems talk to each other, exchange information, and coordinate tasks. In this article, we take a look at MCP, ACP, and A2A: three current approaches from Anthropic, IBM, and Google that aim to streamline the integration of AI systems in the enterprise context.

1. MCP (Model Context Protocol) – The Foundation for Connecting External Data

MCP, developed by Anthropic, focuses on providing LLMs with additional context and information. Essentially, MCP enables standardized connection of external APIs and data sources so that a language model is not limited to its internal knowledge base but can also retrieve current and specific company data. Some important features of MCP:

  • Transport & Protocol: MCP exchanges JSON-RPC messages over HTTP with Server-Sent Events (SSE), or locally over STDIO, which integrates seamlessly into existing systems. Extensions via gRPC are possible if more powerful communication channels are needed.
  • Scope: It is primarily aimed at the connection between LLMs and external data sources such as databases, APIs, and other tools, but is flexible enough to support both cloud-based and local environments.
  • Ideal for: Companies that first want to develop their own B2B agents. With MCP, LLMs can work more robustly and context-sensitively by retrieving additional information in real-time or executing actions.
  • Practical implementation: Our CompanyGPT also supports MCP. This enables companies to develop their own solutions that meet the specific requirements of their business processes.

Despite all the advantages, there are still open topics, such as permission management for data access within MCP. These challenges are part of the ongoing development, which already covers many B2B use cases.
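To make this concrete, here is a minimal sketch of what an MCP-style tool call looks like on the wire. The JSON-RPC 2.0 framing and the `tools/call` method name follow the MCP specification; the `get_revenue` tool and its arguments are hypothetical stand-ins for a company-specific data source.

```python
import json

def build_mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC 2.0 message.

    MCP frames all client/server traffic as JSON-RPC 2.0, whether it
    travels over STDIO or HTTP+SSE. The tool name and arguments here
    are placeholders for a company-specific B2B tool.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: an LLM asking a company backend for revenue figures.
message = build_mcp_tool_call(1, "get_revenue", {"quarter": "Q1", "year": 2025})
parsed = json.loads(message)
print(parsed["method"])  # tools/call
```

The LLM never talks to the database directly: it emits a structured request like this, and the MCP server translates it into the actual API or database call, which is exactly what makes the pattern attractive for B2B integrations.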

2. ACP (Agent Communication Protocol) – The Future of Internal Agent Communication

IBM has developed ACP, an approach that addresses more than just data exchange between LLMs. ACP (Agent Communication Protocol) aims to efficiently enable heterogeneous agents to communicate in an enterprise environment.

  • Transport & Protocol: ACP also builds on proven technologies: JSON-RPC over HTTP and WebSockets. This combination provides a fast, standardized platform that supports both local networks and cloud-based environments.
  • Scope: ACP focuses strongly on agent-to-agent communication, a central prerequisite when multiple specialized AI services need to be coordinated.
  • Ideal for: Companies that want to establish complex, internal AI ecosystems and thereby network different agents (e.g., for sales, logistics, or customer service).
  • Practical benefit: IBM, as a pioneer in the industry, uses ACP in projects like BeeAI, opening up innovative use cases in the enterprise sector.

Although ACP is promising and is continuously being improved by IBM Research, the standard is currently still under development and therefore requires a certain degree of adaptation and integration into existing systems.
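Because the ACP standard is still evolving, the following is only an illustrative sketch of the agent-to-agent pattern it targets, not the final spec. The agent names, method name, and payload fields are all invented; the JSON-RPC message shape follows the transport described above, with an in-process dispatcher standing in for the real network layer.

```python
import json

# Illustrative only: ACP is still under development at IBM, so this
# sketches the general agent-to-agent messaging pattern rather than the
# final specification. Agent names, methods, and fields are invented.

class Agent:
    """A toy agent that dispatches JSON-RPC-shaped requests to handlers."""

    def __init__(self, name: str):
        self.name = name
        self.handlers = {}

    def on(self, method, handler):
        """Register a capability this agent exposes to other agents."""
        self.handlers[method] = handler

    def receive(self, raw: str) -> str:
        """Handle an incoming request and return a JSON-RPC response."""
        msg = json.loads(raw)
        result = self.handlers[msg["method"]](msg["params"])
        return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": result})

# A hypothetical logistics agent exposing one capability.
logistics = Agent("logistics")
logistics.on("check_stock", lambda p: {"sku": p["sku"], "available": 42})

# A hypothetical sales agent calling it with a structured request.
request = json.dumps({"jsonrpc": "2.0", "id": 7,
                      "method": "check_stock", "params": {"sku": "A-100"}})
response = json.loads(logistics.receive(request))
print(response["result"]["available"])  # 42
```

The point of the sketch is the division of labor: each specialized agent (sales, logistics, customer service) only needs to speak the shared message format, not know the internals of its peers.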

3. A2A (Agent-to-Agent) – The Cloud-Native Connection Between AI Agents

Google has developed its A2A project, an approach that specifically targets communication between AI agents in cloud-based environments. Here are some central aspects of A2A:

  • Transport & Protocol: A2A also relies on HTTP (REST) and JSON, supplemented by gRPC for more powerful interactions. This setup allows agents in the cloud to be quickly and efficiently connected.
  • Scope: The focus is on seamless integration of agents, which is particularly advantageous in cloud-native architectures where data and processes continuously flow between different systems.
  • Ideal for: Companies that already rely heavily on cloud services and want to integrate multiple specialized AI agents, for example in Vertex AI Agent Builder.
  • Practical application: A2A is still in an early phase, but Google is investing heavily in standardization to ensure high scalability and interoperability.

The challenge with A2A is that many details around standard definition and security aspects still need to be further elaborated.
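A core building block of A2A is discovery: each agent publishes a JSON "Agent Card" (typically at a well-known URL) so that other agents can find out what it can do before talking to it. The sketch below is based on the publicly documented draft; since the standard is still being elaborated, field details may change, and the "invoice-agent" itself is hypothetical.

```python
import json

# Sketch of an A2A "Agent Card": the JSON self-description an agent
# publishes (typically at /.well-known/agent.json) so other agents can
# discover its capabilities. Field names follow the public A2A draft;
# the standard is still evolving, so treat this as illustrative.
# The agent, URL, and skill below are hypothetical.
agent_card = {
    "name": "invoice-agent",
    "description": "Answers questions about open invoices.",
    "url": "https://agents.example.com/invoice",
    "version": "0.1.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "lookup_invoice",
            "name": "Invoice lookup",
            "description": "Finds an invoice by its number.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

For cloud-native setups this discovery step is what allows agents from different vendors or teams to be composed without hand-written integration code for every pair.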

Comparison: MCP as a Solid Entry into AI Interoperability

When companies consider how to optimize their existing B2B processes through LLMs and AI agents, the question often arises: which protocol is the right one? While ACP and A2A become particularly interesting later, when multiple interoperable AI agents need to be integrated, MCP already delivers value in the first step. With MCP, companies can develop and integrate their own B2B agents and realize use cases in which LLMs obtain external data and thus act significantly smarter. OpenAI has also recognized this potential and is integrating MCP into ChatGPT.

Our CompanyGPT, based on open-source technologies, already supports the MCP approach and enables our customers to implement customized, context-sensitive solutions. This is especially important in a phase where cloud-native GPT solutions offer enormous potential, but at the same time still face challenges, such as permission management for data access.

Written by

Tobias Jonas

Co-CEO, M.Sc.

Tobias Jonas, M.Sc. is co-founder and Co-CEO of innFactory AI Consulting GmbH. He is a leading innovator in the fields of Artificial Intelligence and cloud computing. As co-founder of innFactory GmbH, he has successfully led hundreds of AI and cloud projects and established the company as a key player in the German IT sector. Tobias always keeps his finger on the pulse: he recognized the potential of AI agents early on and hosted one of the first meetups on the topic in Germany. He also drew attention to the MCP protocol within the first month of its release and informed his followers about the Agentic AI Foundation on the day it was founded. Alongside his executive roles, Tobias Jonas is active in several professional and business associations, including the KI Bundesverband and the digital committee of the IHK München und Oberbayern, and leads hands-on AI and cloud projects at the Technische Hochschule Rosenheim. As a keynote speaker, he shares his expertise on AI and makes complex technological concepts accessible.
