This is an experimental feature and will evolve based on community feedback.
The `RequirementAgent` is a declarative AI agent that combines language models, tools, and execution requirements to create predictable, controlled behavior across different LLMs.
Why Use Requirement Agent?
Building agents that work reliably across multiple LLMs is difficult. Most agents are tightly tuned to specific models, with rigid prompts that cause other models to misinterpret instructions, skip tools, or hallucinate facts.
`RequirementAgent` provides a declarative framework for designing agents that strikes a balance between flexibility and control. It allows for agent behavior that is both predictable and adaptable, without the complexity and limitations of more rigid systems.
Core Concepts
Everything is a Tool
- Data retrieval, web search, reasoning, and final answers are all implemented as tools
- This structure ensures valid responses with structured outputs and eliminates parsing errors
Requirements Control Tool Usage
You can define rules that control when and how tools are used:
- “Only use tool A after tool B has been called”
- “Tool D must be used exactly twice, but not two times in a row”
- “Tool E can only be used after both tool A and tool B have been used”
- “Tool F must be called immediately after tool D”
- “You must call tool C at least once before giving a final answer”
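To make such constraints concrete, here is a framework-free sketch (plain Python, not the framework's API) that checks a recorded tool-call trace against two of the rules above:

```python
def violates_rules(calls: list[str]) -> list[str]:
    """Check a tool-call trace against two of the rules listed above."""
    errors = []
    # "Only use tool A after tool B has been called"
    if "A" in calls and ("B" not in calls or calls.index("A") < calls.index("B")):
        errors.append("A used before B")
    # "Tool D must be used exactly twice, but not two times in a row"
    if calls.count("D") != 2:
        errors.append("D not used exactly twice")
    if any(x == y == "D" for x, y in zip(calls, calls[1:])):
        errors.append("D used twice in a row")
    return errors
```

In the framework itself, such rules are expressed declaratively as requirements rather than checked after the fact; this sketch only shows what the constraints mean on a concrete trace.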
Quickstart
This example demonstrates how to create an agent with enforced tool execution order:
```python
from beeai_framework.agents.experimental import RequirementAgent
from beeai_framework.agents.experimental.requirements.conditional import ConditionalRequirement
from beeai_framework.backend import ChatModel
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
from beeai_framework.tools.think import ThinkTool
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool  # import paths may vary slightly by framework version

# Create an agent that plans activities based on weather and events
agent = RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3:8b"),
    tools=[
        ThinkTool(),  # For reasoning
        OpenMeteoTool(),  # For weather data
        DuckDuckGoSearchTool(),  # For event search
    ],
    instructions="Plan activities for a given destination based on current weather and events.",
    requirements=[
        # Force thinking first
        ConditionalRequirement(ThinkTool, force_at_step=1),
        # Search only after getting weather, at least once
        ConditionalRequirement(DuckDuckGoSearchTool, only_after=[OpenMeteoTool], min_invocations=1),
    ],
)

# Run with execution logging (inside an async context)
response = await agent.run("What to do in Boston?").middleware(GlobalTrajectoryMiddleware())
print(response.result.text)
```
This agent will:
- First use `ThinkTool` to reason about the request
- Check weather using `OpenMeteoTool`
- Search for events using `DuckDuckGoSearchTool` (at least once)
- Provide recommendations based on the gathered information
➡️ Check out more examples (multi-agent, custom requirements, …).
Requirements and Rules
Requirements are functions that evaluate the current agent state and produce a list of rules. The system evaluates requirements before each LLM call to determine which tools are available.
Rules define specific constraints on tool usage. Each rule contains the following attributes:
| Attribute | Description |
| --- | --- |
| `target` | The tool the rule applies to |
| `allowed` | Whether the tool can be used |
| `hidden` | Whether the tool’s definition is visible to the agent |
| `prevent_stop` | Whether the rule blocks termination |
| `forced` | Whether the tool must be invoked |
When requirements generate conflicting rules, the system applies this precedence:
- Forbidden takes precedence: If any rule forbids a tool, it cannot be used
- Highest priority forced rule wins: Among forced rules, the highest priority requirement determines the forced tool
- Multiple prevention rules combine: All `prevent_stop` rules are respected
Requirements are evaluated on every iteration before calling the LLM. Design them to be efficient for frequent execution.
Start with a single requirement and add more as needed.
Conditional Requirement
The conditional requirement controls when tools can be used based on specific conditions.
Force Execution Order
This example forces the agent to use `ThinkTool` for reasoning, followed by `DuckDuckGoSearchTool` to retrieve data. This trajectory ensures that even a small model can arrive at the correct answer by preventing it from skipping tool calls entirely.
```python
RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3:8b"),
    tools=[ThinkTool(), DuckDuckGoSearchTool()],
    requirements=[
        ConditionalRequirement(ThinkTool, force_at_step=1),  # Force ThinkTool at the first step
        ConditionalRequirement(DuckDuckGoSearchTool, force_at_step=2),  # Force DuckDuckGo at the second step
    ],
)
```
Creating a ReAct Agent
A ReAct Agent (Reason and Act) follows this trajectory:
```
Think -> Use a tool -> Think -> Use a tool -> Think -> ... -> End
```
You can achieve this by forcing the execution of the `ThinkTool` after every other tool:
```python
RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3:8b"),
    tools=[ThinkTool(), WikipediaTool(), OpenMeteoTool()],
    requirements=[ConditionalRequirement(ThinkTool, force_at_step=1, force_after=[OpenMeteoTool, WikipediaTool])],
)
```
For a more general approach, use `ConditionalRequirement(ThinkTool, force_at_step=1, force_after=Tool, can_be_used_in_row=False)`, where the option `can_be_used_in_row=False` prevents `ThinkTool` from being used multiple times in a row.
ReAct Agent + Custom Conditions
You may want an agent that works like ReAct but skips the “reasoning” step under certain conditions. This example uses the `priority` option to tell the agent to send an email right after creating an order, while calling `ThinkTool` after every other action.
```python
RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3:8b"),
    tools=[ThinkTool(), retrieve_basket(), create_order(), send_email()],
    requirements=[
        ConditionalRequirement(ThinkTool, force_at_step=1, force_after=Tool, priority=10),
        ConditionalRequirement(send_email, only_after=create_order, force_after=create_order, priority=20, max_invocations=1),
    ],
)
```
Prevent Early Termination
The following requirement prevents the agent from providing a final answer before it calls `my_tool`.

```python
ConditionalRequirement(my_tool, min_invocations=1)
```
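The gating logic behind `min_invocations` can be illustrated with a tiny framework-free check (a sketch, not the actual implementation):

```python
def can_finish(calls: list[str], tool: str = "my_tool", min_invocations: int = 1) -> bool:
    """The final answer stays blocked until `tool` has been called
    at least `min_invocations` times (sketch of the gating rule)."""
    return calls.count(tool) >= min_invocations
```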
Complete Parameter Reference
```python
ConditionalRequirement(
    target_tool,  # Tool class, instance, or name (can also be specified as `target=...`)
    name="",  # (optional) Name, useful for logging
    only_before=[...],  # (optional) Disable target_tool after any of these tools are called
    only_after=[...],  # (optional) Disable target_tool until all of these tools have been called
    force_after=[...],  # (optional) Force target_tool execution immediately after any of these tools are called
    min_invocations=0,  # (optional) Minimum times the tool must be called before the agent can stop
    max_invocations=10,  # (optional) Maximum times the tool can be called before being disabled
    force_at_step=1,  # (optional) Step number at which the tool must be invoked
    only_success_invocations=True,  # (optional) Whether 'force_at_step' counts only successful invocations
    priority=10,  # (optional) Higher number means higher priority for requirement enforcement
    can_be_used_in_row=True,  # (optional) Whether the tool can be invoked twice in a row
    enabled=True,  # (optional) Whether this requirement is active; set to False to skip it
    custom_checks=[
        # (optional) Custom callbacks; all must pass for the tool to be used
        lambda state: any("weather" in msg.text for msg in state.memory.messages if isinstance(msg, UserMessage)),
        lambda state: state.iteration > 0,
    ],
)
```
Pass a class instance (e.g., `weather_tool = ...`) or a class (`OpenMeteoTool`) rather than a tool’s name, as some tools may have dynamically generated names.
The reasoner throws an error if it detects contradictory rules or a rule without an existing target.
Ask Permission Requirement
Some tools may be expensive to run or have destructive effects. For these tools, you may want to get approval from an external system or directly from the user.
```python
RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3:8b"),
    tools=[get_data, remove_data, update_data],
    requirements=[AskPermissionRequirement([remove_data, get_data])],
)
```
Using a Custom Handler
```python
async def handler(tool: Tool, input: dict[str, Any]) -> bool:
    # your implementation
    return True

AskPermissionRequirement(..., handler=handler)
```
Complete Parameter Reference
```python
AskPermissionRequirement(
    include=[...],  # (optional) List of targets (tool name, instance, or class) requiring explicit approval
    exclude=[...],  # (optional) List of targets to exclude
    remember_choices=False,  # (optional) Remember approvals so the agent does not ask again
    hide_disallowed=False,  # (optional) Permanently disable disallowed targets
    always_allow=False,  # (optional) Skip the asking part
    # (optional) Custom handler, can be async
    handler=lambda tool, tool_input: input(
        f"The agent wants to use the '{tool.name}' tool.\nInput: {tool_input}\nDo you allow it? (yes/no): "
    ).strip().startswith("yes"),
)
```
If no targets are specified, permission is required for all tools.
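One way to approximate `remember_choices`-style behavior in a standalone handler is shown below. This is a framework-free sketch; `ask` is a hypothetical stand-in for prompting the user or an external system:

```python
import asyncio


def make_handler(ask, remember: bool = True):
    """Approval-handler factory (framework-free sketch).

    `ask(tool_name, tool_input)` stands in for prompting a user or an
    external approval system. Approved tools are cached when
    remember=True, mimicking the remember_choices option above."""
    approved: set[str] = set()

    async def handler(tool_name: str, tool_input: dict) -> bool:
        if tool_name in approved:
            return True  # previously approved; do not ask again
        if ask(tool_name, tool_input):
            if remember:
                approved.add(tool_name)
            return True
        return False

    return handler
```

The closure keeps the set of approved tools between calls, so a given tool only triggers one prompt per agent run.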
Custom Requirement
You can create a custom requirement by implementing the base Requirement class.
The Requirement class has the following lifecycle:
- An external caller invokes `init(tools)`:
  - `tools` is the list of tools available to the given agent.
  - This method is called only once, at the very beginning.
  - It is an ideal place to introduce hooks, validate the presence of certain tools, etc.
  - The return type of the `init` method is `None`.
- An external caller invokes `run(state)`:
  - `state` is a generic parameter; in `RequirementAgent`, it refers to the `RequirementAgentRunState` class.
  - This method is called multiple times, typically before an LLM call.
  - The return type of the `run` method is a list of rules.
Premature Stop Requirement
This example demonstrates how to write a requirement that prevents the agent from answering if the question contains a specific phrase:
```python
import asyncio

from beeai_framework.agents.experimental import RequirementAgent
from beeai_framework.agents.experimental.requirements import Requirement, Rule
from beeai_framework.agents.experimental.requirements.requirement import run_with_context
from beeai_framework.agents.experimental.types import RequirementAgentRunState
from beeai_framework.backend import AssistantMessage, ChatModel
from beeai_framework.context import RunContext
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool


class PrematureStopRequirement(Requirement[RequirementAgentRunState]):
    """Prevents the agent from answering if a certain phrase occurs in the conversation"""

    name = "premature_stop"

    def __init__(self, phrase: str) -> None:
        super().__init__()
        self._phrase = phrase
        self._priority = 100  # (optional), default is 10

    @run_with_context
    async def run(self, input: RequirementAgentRunState, context: RunContext) -> list[Rule]:
        last_message = input.memory.messages[-1]
        if self._phrase in last_message.text:
            await input.memory.add(
                AssistantMessage(
                    "The final answer is that the system policy does not allow me to answer this type of question.",
                    {"tempMessage": True},  # the message gets removed in the next iteration
                )
            )
            return [Rule(target="final_answer", forced=True)]
        else:
            return []


async def main() -> None:
    agent = RequirementAgent(
        llm=ChatModel.from_name("ollama:granite3.3:8b"),
        tools=[DuckDuckGoSearchTool()],
        requirements=[PrematureStopRequirement("value of x")],
    )
    prompt = "y = 2x + 4, what is the value of x?"
    print("👤 User: ", prompt)
    response = await agent.run(prompt).middleware(GlobalTrajectoryMiddleware())
    print("🤖 Agent: ", response.result.text)


if __name__ == "__main__":
    asyncio.run(main())
```
Examples