Overview
An AI Agent is a system built on language models (LLMs or SLMs) that can solve complex tasks through structured reasoning and autonomous or human-assisted actions. The BeeAI Framework serves as the orchestration layer that enables agents to do this and more:
- Coordinate with LLMs: Manages communication between your agent and language models
- Tool Management: Provides agents with access to external tools and handles their execution
- Response Processing: Processes and validates tool outputs and model responses
- Memory Management: Maintains conversation context and state across interactions
- Error Handling: Manages retries, timeouts, and graceful failure recovery
- Event Orchestration: Emits detailed events for monitoring and debugging agent behavior
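In pseudocode, that coordination amounts to a simple loop: send messages to the model, route any tool calls to registered tools, feed the results back, and stop when the model produces a final answer. The sketch below is a hypothetical illustration only; the `fake_model`, `TOOLS`, and `run_agent` names are invented for this example and are not part of the BeeAI API.

```python
# Minimal sketch of an orchestration loop (hypothetical names, not the
# actual BeeAI API): route the model's tool calls to registered tools,
# feed results back, and stop when the model produces a final answer.

def fake_model(messages):
    """Stand-in for an LLM call: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "weather", "args": {"city": "Prague"}}
    return {"answer": "It is sunny in Prague."}

TOOLS = {"weather": lambda city: f"Sunny in {city}"}

def run_agent(prompt, max_iterations=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        reply = fake_model(messages)
        if "answer" in reply:                   # model finished reasoning
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("max_iterations exceeded")

print(run_agent("What is the weather in Prague?"))  # It is sunny in Prague.
```

The real framework layers retries, timeouts, memory, and event emission on top of this basic cycle.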
Dive deeper into the concepts behind AI agents in this research article from IBM.
Supported in Python and TypeScript.
Customizing Agent Behavior
You can customize your agent’s behavior in several key ways:

1. Configuring the Language Model Backend
The backend system manages your connection to different language model providers. The BeeAI Framework supports multiple LLM providers through a unified interface. Learn more about available backends and how to set their parameters in our backend documentation.
- Unified Interface: Work with different providers using the same API
- Model Parameters: Configure temperature, max tokens, and other model settings
- Provider Support: OpenAI, Anthropic, Ollama, Groq, and more
- Local & Cloud: Support for both local and cloud-hosted models
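As a rough illustration of what a unified interface means in practice, the sketch below parses a `provider:model` name and merges model parameters into one settings object. The `UnifiedChatModel` class is invented for this example and is not the framework’s real backend API; see the backend documentation for the actual classes.

```python
# Hypothetical sketch of a unified backend interface (invented names,
# not the real BeeAI backend API).

class UnifiedChatModel:
    def __init__(self, provider: str, model_id: str, **settings):
        self.provider = provider          # e.g. "ollama", "openai"
        self.model_id = model_id          # e.g. "granite3.3:8b"
        # Model parameters such as temperature and max_tokens live here;
        # caller-supplied values override the defaults.
        self.settings = {"temperature": 0.0, "max_tokens": 1024, **settings}

    @classmethod
    def from_name(cls, name: str, **settings):
        """Split a "provider:model" string at the first colon."""
        provider, _, model_id = name.partition(":")
        return cls(provider, model_id, **settings)

model = UnifiedChatModel.from_name("ollama:granite3.3:8b", temperature=0.7)
print(model.provider, model.model_id, model.settings["temperature"])
```

Because the interface is uniform, swapping `"ollama:…"` for `"openai:…"` changes the provider without touching the rest of the agent code.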
2. Setting the System Prompt
The system prompt defines your agent’s behavior, personality, and capabilities. You can configure this through several parameters when initializing an agent:
- `role`: Defines the agent’s persona and primary function
- `instructions`: List of specific behavioral guidelines
- `notes`: Additional context or special considerations
- `name` and `description`: Help identify the agent’s purpose; they are useful with the `HandoffTool`, the `Serve` module, and when you access agent metadata via `agent.meta`
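A system prompt built from these parameters might be assembled along the following lines. The `build_system_prompt` helper and its formatting are hypothetical, not BeeAI’s actual template:

```python
# Sketch of how role, instructions, and notes might be combined into a
# system prompt (hypothetical formatting, not BeeAI's real template).

def build_system_prompt(role, instructions, notes=None):
    parts = [f"You are {role}."]
    if instructions:
        parts.append("Guidelines:")
        parts.extend(f"- {rule}" for rule in instructions)
    if notes:
        parts.append(f"Note: {notes}")
    return "\n".join(parts)

prompt = build_system_prompt(
    role="a helpful travel assistant",
    instructions=["Answer concisely.", "Cite sources when possible."],
    notes="Prefer metric units.",
)
print(prompt)
```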
3. Configuring Agent Run Options
When executing an agent, you can provide additional options to guide its behavior and execution settings.

Setting Execution Settings and Guiding Agent Run Behavior
- `expected_output`: Guides the agent toward a specific unstructured or structured output format. `output_structured` is defined only when `expected_output` is a Pydantic model or a JSON schema; the text representation is always available via `response.output`.
- `backstory`: Provides additional context to help the agent understand the user’s situation
- `total_max_retries`: Controls the total number of retry attempts across the entire agent execution
- `max_retries_per_step`: Limits retries for individual steps (like tool calls or model responses)
- `max_iterations`: Sets the maximum number of reasoning cycles the agent can perform
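To make the interplay of these limits concrete, here is a hypothetical sketch (the `run_with_limits` helper is invented for illustration, not a framework function): each step may retry up to `max_retries_per_step` times, and every retry anywhere in the run counts against `total_max_retries`.

```python
# Sketch of how the execution limits interact (hypothetical implementation):
# per-step retries are also counted against a run-wide retry budget.

def run_with_limits(steps, max_iterations=10, total_max_retries=4,
                    max_retries_per_step=2):
    retries_used = 0
    for step in steps[:max_iterations]:          # max_iterations caps the run
        for attempt in range(1 + max_retries_per_step):
            try:
                step()
                break
            except Exception:
                retries_used += 1
                if attempt == max_retries_per_step:
                    raise RuntimeError("step exhausted its retries")
                if retries_used > total_max_retries:
                    raise RuntimeError("run exhausted its retry budget")
    return retries_used

# A flaky step that fails on its first call, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] == 1:
        raise ValueError("transient failure")

print(run_with_limits([flaky, flaky]))   # one retry used in total
```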
Defaults are set for `max_iterations`, `total_max_retries`, and `max_retries_per_step`, but you can override them with your own values.

4. Adding Tools
Enhance your agent’s capabilities by providing it with tools to interact with external systems. Learn more about BeeAI-provided tools and creating custom tools in our tools documentation.

5. Configuring Memory
Memory allows your agent to maintain context across multiple interactions. Different memory types serve different use cases. Learn more about our built-in options in the memory documentation.

Additional Agent Options
- Observability & Debugging: Monitor agent behavior with detailed event tracking and logging systems
- MCP (Model Context Protocol): Connect to external services and data sources
- A2A (Agent-to-Agent): Enable multi-agent communication and coordination
- Caching: Improve performance by caching LLM responses and tool outputs
- Event System: Build reactive applications using the comprehensive emitter framework
- RAG Integration: Connect your agents to knowledge bases and document stores
- Serialization: Save and restore agent state for persistence and deployment
- Error Handling: Implement robust error recovery and debugging strategies
Agent Types
BeeAI Framework provides several agent implementations.

Upcoming change:
The Requirement agent will become the primary supported agent. The ReAct and tool-calling agents will not be actively supported.
Requirement Agent
This is the recommended agent; it is currently supported only in Python.
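As a conceptual stand-in for the omitted example, the sketch below illustrates the idea behind requirements: declarative rules that constrain which actions the agent may take next (for instance, requiring a search before a final answer). All names here are invented for illustration and do not reflect the real Requirement agent API.

```python
# Conceptual sketch of "requirements" as declarative rules that filter
# which actions are allowed next (invented names, not the real API).

def allowed_actions(actions, history, requirements):
    """Return the candidate actions permitted by the requirements."""
    allowed = []
    for action in actions:
        ok = True
        for req in requirements:
            # Ordering rule: `action` is only allowed once `after` has run.
            if req["action"] == action and req["after"] not in history:
                ok = False
        if ok:
            allowed.append(action)
    return allowed

# The agent may not produce a final answer until "search" has run.
requirements = [{"action": "final_answer", "after": "search"}]
print(allowed_actions(["search", "final_answer"], [], requirements))
print(allowed_actions(["search", "final_answer"], ["search"], requirements))
```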
ReAct Agent
The ReAct Agent is available in both Python and TypeScript, but it is no longer actively supported.
During execution, the agent emits partial updates as it generates each line, followed by complete updates. Updates follow a strict order: first all partial updates for “thought,” then a complete “thought” update, then moving to the next component.
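That ordering can be sketched as follows; the `emit_updates` helper is a hypothetical stand-in for the framework’s emitter, shown only to make the partial-then-complete sequence explicit.

```python
# Sketch of the update ordering described above (hypothetical emitter):
# all partial updates for a component are emitted first, line by line,
# followed by a single complete update for that component.

def emit_updates(component, lines):
    events = []
    for line in lines:                      # one partial update per line
        events.append(("partial", component, line))
    events.append(("complete", component, "\n".join(lines)))
    return events

events = emit_updates("thought", ["I should check the weather.",
                                  "I will call the weather tool."])
for kind, component, _payload in events:
    print(kind, component)                  # partial, partial, complete
```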
Tool Calling Agent
The Tool Calling Agent is available in both Python and TypeScript, but it is no longer actively supported.
Custom Agent
For advanced use cases, you can create your own agent implementation by extending the `BaseAgent` class.
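A minimal sketch of that pattern follows; the `BaseAgent` stand-in here is simplified for illustration and does not match the framework’s real base-class interface.

```python
# Hypothetical sketch of extending an agent base class (simplified
# stand-in; the real BaseAgent interface lives in the framework).
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    """Minimal stand-in for the framework's agent base class."""

    @abstractmethod
    def run(self, prompt: str) -> str:
        """Execute the agent on a single prompt."""

class EchoAgent(BaseAgent):
    """A trivial custom agent that uppercases its input."""

    def run(self, prompt: str) -> str:
        return prompt.upper()

agent = EchoAgent()
print(agent.run("hello agents"))   # HELLO AGENTS
```

A real subclass would plug in a backend model, tools, and memory inside `run`, but the extension point is the same: implement the abstract interface.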
Agent Workflows
Upcoming change:
Workflows are under construction to support more dynamic multi-agent patterns. If you’d like to participate in shaping the vision, contribute to the discussion in this V2 Workflow Proposal.