The Agent2Agent (A2A) Protocol is the open standard for AI agent communication. Developed under the Linux Foundation, A2A makes it possible for agents to work together seamlessly across platforms, frameworks, and ecosystems.
A2AServer lets you expose agents built in the BeeAI framework via the A2A protocol.
A2A supports only one agent per server.
```python
from beeai_framework.adapters.a2a import A2AServer, A2AServerConfig
from beeai_framework.agents.experimental import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.serve.utils import LRUMemoryManager
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
from beeai_framework.tools.weather import OpenMeteoTool


def main() -> None:
    llm = ChatModel.from_name("ollama:granite3.3:8b")
    agent = RequirementAgent(
        llm=llm,
        tools=[DuckDuckGoSearchTool(), OpenMeteoTool()],
        memory=UnconstrainedMemory(),
    )

    # Register the agent with the A2A server and run the HTTP server.
    # For RequirementAgent, we don't need to specify a custom agent factory method
    # because it is already registered in the A2AServer.
    # We use the LRU memory manager to keep a limited number of sessions in memory.
    A2AServer(
        config=A2AServerConfig(port=9999),
        memory_manager=LRUMemoryManager(maxsize=100),
    ).register(agent).serve()


if __name__ == "__main__":
    main()
```
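Once the server is running, any A2A-capable client can call the exposed agent. As a quick local check, the sketch below connects to it with the framework's A2AAgent client adapter; the URL, the prompt, and the exact response attributes are assumptions and may differ across framework versions.

```python
import asyncio

from beeai_framework.adapters.a2a.agents import A2AAgent
from beeai_framework.memory import UnconstrainedMemory


async def main() -> None:
    # Connect to the A2A server from the example above
    # (assumed to be running locally on port 9999).
    agent = A2AAgent(url="http://127.0.0.1:9999", memory=UnconstrainedMemory())

    # Send a prompt to the remote agent over the A2A protocol.
    response = await agent.run("What is the current weather in Prague?")

    # Print the final answer; the response shape may vary between versions.
    print(response.result.text)


if __name__ == "__main__":
    asyncio.run(main())
```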