The RequirementAgent is a declarative AI agent implementation that provides predictable, controlled execution behavior across different language models through rule-based constraints. Language models vary significantly in their reasoning capabilities and tool-calling sophistication; RequirementAgent normalizes these differences by enforcing consistent execution patterns regardless of the underlying model's strengths or weaknesses. Rules can be configured as strictly or as flexibly as necessary, adapting to the requirements of each task.
This example demonstrates how to create an agent with enforced tool execution order. This agent will:
First use ThinkTool to reason about the request, enabling a "ReAct" pattern
Check weather using OpenMeteoTool, which it must call at least once but not consecutively
Search for events using DuckDuckGoSearchTool at least once
Provide recommendations based on the gathered information
```python
import asyncio

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.agents.requirement.requirements.conditional import (
    ConditionalRequirement,
)
from beeai_framework.backend import ChatModel
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
from beeai_framework.tools.think import ThinkTool
from beeai_framework.tools.weather import OpenMeteoTool


# Create an agent that plans activities based on weather and events
async def main() -> None:
    agent = RequirementAgent(
        llm=ChatModel.from_name("ollama:granite4:micro"),
        tools=[
            ThinkTool(),  # to reason
            OpenMeteoTool(),  # retrieve weather data
            DuckDuckGoSearchTool(),  # search web
        ],
        instructions="Plan activities for a given destination based on current weather and events.",
        requirements=[
            # Force thinking first
            ConditionalRequirement(ThinkTool, force_at_step=1),
            # Search only after getting weather, and at least once
            ConditionalRequirement(
                DuckDuckGoSearchTool,
                only_after=[OpenMeteoTool],
                min_invocations=1,
                max_invocations=2,
            ),
            # Weather tool must be used at least once but not consecutively
            ConditionalRequirement(
                OpenMeteoTool,
                consecutive_allowed=False,
                min_invocations=1,
                max_invocations=2,
            ),
        ],
    )

    # Run with execution logging
    response = await agent.run("What to do in Boston?").middleware(GlobalTrajectoryMiddleware())
    print(f"Final Answer: {response.last_message.text}")


if __name__ == "__main__":
    asyncio.run(main())
```
RequirementAgent operates on a simple principle: developers declare rules on specific tools using ConditionalRequirement objects, while the framework automatically handles all orchestration logic behind the scenes. Developers can modify agent behavior by adjusting rule parameters rather than rewriting complex state-management logic. This creates a clear separation between business logic (the rules) and execution control (framework-managed).

In RequirementAgent, all capabilities (including data retrieval, web search, reasoning patterns, and final_answer) are implemented as tools to ensure structured, reliable execution. Each ConditionalRequirement produces Rule objects, and each rule is bound to a single tool:
| Attribute | Purpose | Value |
| --- | --- | --- |
| target | Which tool the rule applies to for a given turn | str |
| allowed | Whether the tool can be used for a given turn and is present in the system prompt | bool |
| hidden | Whether the tool definition is visible to the agent for a given turn and in the system prompt | bool |
| prevent_stop | Whether the rule prevents the agent from terminating for a given turn | bool |
| forced | Whether the tool must be invoked on a given turn | bool |
| reason | Optionally explains to the LLM why the given rule is applied | str |
When requirements generate conflicting rules, the system applies this precedence:
Forbidden overrides all: If any requirement forbids a tool, that tool cannot be used.
Highest priority forced rule wins: If multiple requirements force tools, the highest-priority requirement decides which tool is forced.
Prevention rules accumulate: all prevent_stop rules apply simultaneously.
The agent's execution flow proceeds through the following phases:

1. State Initialization: Creates RequirementAgentRunState with UnconstrainedMemory, execution steps, and iteration tracking.
2. Requirements Processing: RequirementsReasoner analyzes requirements and determines allowed tools, tool-choice preferences, and termination conditions.
3. Request Creation: Creates a structured request with allowed_tools, tool_choice, and can_stop flags based on the current state and requirements. The system evaluates requirements before each LLM call to determine which tools to make available to the LLM.
4. LLM Interaction: Calls the language model with the system message, conversation history, and constrained tool set.
5. Tool Execution: Executes requested tools via _run_tools, handles errors, and updates conversation memory.
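The phases above can be sketched as a toy loop. This is a conceptual illustration with stand-in types, not the framework's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ToyState:
    memory: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    iteration: int = 0


def run_loop(
    llm_call: Callable,            # picks a tool name, or None for a final answer
    tools: dict[str, Callable],
    requirements: list[Callable],  # each returns a list of (target, allowed) pairs
    prompt: str,
    max_iterations: int = 5,
) -> ToyState:
    state = ToyState(memory=[("user", prompt)])               # 1. state initialization
    for _ in range(max_iterations):
        state.iteration += 1
        rules = [rule for req in requirements                 # 2. requirements processing
                 for rule in req(state)]
        allowed = [name for name in tools                     # 3. request creation
                   if not any(t == name and not ok for t, ok in rules)]
        choice = llm_call(state, allowed)                     # 4. LLM interaction
        if choice is None:
            return state
        output = tools[choice]()                              # 5. tool execution
        state.steps.append((choice, output))
        state.memory.append(("tool", output))
    return state
```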
Developers declare rules by creating ConditionalRequirement objects that target specific tools. The framework automatically handles all orchestration:
```python
# Declare: agent must think before acting
ConditionalRequirement(ThinkTool, force_at_step=1)

# Declare: require weather check before web search
ConditionalRequirement(DuckDuckGoSearchTool, only_after=[OpenMeteoTool])

# Declare: prevent consecutive uses of the same tool
ConditionalRequirement(OpenMeteoTool(), consecutive_allowed=False)
```
```python
ConditionalRequirement(
    target_tool,  # Tool class, instance, or name (can also be specified as `target=...`)
    name="",  # (optional) Name, useful for logging
    only_before=[...],  # (optional) Disable target_tool after any of these tools are called
    only_after=[...],  # (optional) Disable target_tool before all of these tools are called
    force_after=[...],  # (optional) Force target_tool execution immediately after any of these tools are called
    min_invocations=0,  # (optional) Minimum times the tool must be called before the agent can stop
    max_invocations=10,  # (optional) Maximum times the tool can be called before being disabled
    force_at_step=1,  # (optional) Step number at which the tool must be invoked
    only_success_invocations=True,  # (optional) Whether 'force_at_step' counts only successful invocations
    priority=10,  # (optional) Higher relative number means higher priority for requirement enforcement
    consecutive_allowed=True,  # (optional) Whether the tool can be invoked twice in a row
    force_prevent_stop=False,  # (optional) If True, prevents a final answer when a forced target_tool call occurs
    enabled=True,  # (optional) Whether to skip this requirement's execution
    custom_checks=[  # (optional) Custom callbacks; all must pass for the tool to be used
        lambda state: any("weather" in msg.text for msg in state.memory.messages if isinstance(msg, UserMessage)),
        lambda state: state.iteration > 0,
    ],
)
```
Start with a single requirement and add more as needed.
Curious to see it in action?
Explore our interactive exercises to discover how the agent solves real problems step by step!
This example forces the agent to use ThinkTool for reasoning followed by DuckDuckGoSearchTool to retrieve data. This trajectory ensures that even a small model can arrive at the correct answer by preventing it from skipping tool calls entirely.
```python
RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3"),
    tools=[ThinkTool(), DuckDuckGoSearchTool()],
    requirements=[
        ConditionalRequirement(ThinkTool, force_at_step=1),  # Force ThinkTool at the first step
        ConditionalRequirement(DuckDuckGoSearchTool, force_at_step=2),  # Force DuckDuckGo at the second step
    ],
)
```
For a more general approach, use ConditionalRequirement(ThinkTool, force_at_step=1, force_after=Tool, consecutive_allowed=False), where the option consecutive_allowed=False prevents ThinkTool from being used multiple times in a row.
You may want an agent that works like ReAct but skips the “reasoning” step under certain conditions. This example uses the priority option to tell the agent to send an email after creating an order, while calling ThinkTool as the first step and after retrieve_basket.
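The example code itself is not shown here, so the following is a hedged sketch of how such an agent might be configured. The tools retrieve_basket, create_order, and send_email are hypothetical placeholders:

```python
from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.agents.requirement.requirements.conditional import ConditionalRequirement
from beeai_framework.backend import ChatModel
from beeai_framework.tools.think import ThinkTool

# retrieve_basket, create_order, and send_email are hypothetical tools
# assumed to be defined elsewhere.
agent = RequirementAgent(
    llm=ChatModel.from_name("ollama:granite4:micro"),
    tools=[ThinkTool(), retrieve_basket, create_order, send_email],
    requirements=[
        # Think at the first step and again after retrieving the basket,
        # but never twice in a row
        ConditionalRequirement(
            ThinkTool, force_at_step=1, force_after=[retrieve_basket], consecutive_allowed=False
        ),
        # Higher priority than the ThinkTool requirement, so after create_order
        # the agent sends the email instead of pausing to reason
        ConditionalRequirement(send_email, force_after=[create_order], priority=20),
    ],
)
```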
Some tools may be expensive to run or have destructive effects.
For these tools, you may want to get approval from an external system or directly from the user. The following agent first asks the user before it runs the remove_data or the get_data tool.
```python
AskPermissionRequirement(
    include=[...],  # (optional) List of targets (tool name, instance, or class) requiring explicit approval
    exclude=[...],  # (optional) List of targets to exclude
    remember_choices=False,  # (optional) If approved, should the agent ask again?
    hide_disallowed=False,  # (optional) Permanently disable disallowed targets
    always_allow=False,  # (optional) Skip the asking part
    handler=lambda tool, tool_input: input(  # (optional) Custom handler, can be async
        f"The agent wants to use the '{tool.name}' tool.\nInput: {tool_input}\nDo you allow it? (yes/no): "
    ).strip().startswith("yes"),
)
```
If no targets are specified, permission is required for all tools.
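A minimal setup might look like the following sketch. The import path for AskPermissionRequirement and the get_data/remove_data tools are assumptions, not confirmed by this page:

```python
from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.agents.requirement.requirements.ask_permission import AskPermissionRequirement
from beeai_framework.backend import ChatModel

# get_data and remove_data are hypothetical tools defined elsewhere.
agent = RequirementAgent(
    llm=ChatModel.from_name("ollama:granite4:micro"),
    tools=[get_data, remove_data],
    requirements=[AskPermissionRequirement(include=[remove_data, get_data])],
)
```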
This example demonstrates how to write a requirement that prevents the agent from answering if the question contains a specific phrase:
```python
import asyncio

from beeai_framework.agents.requirement import RequirementAgent, RequirementAgentRunState
from beeai_framework.agents.requirement.requirements.requirement import Requirement, Rule, run_with_context
from beeai_framework.backend import AssistantMessage, ChatModel
from beeai_framework.context import RunContext
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool


class PrematureStopRequirement(Requirement[RequirementAgentRunState]):
    """Prevents the agent from answering if a certain phrase occurs in the conversation."""

    name = "premature_stop"

    def __init__(self, phrase: str, reason: str) -> None:
        super().__init__()
        self._reason = reason
        self._phrase = phrase
        self._priority = 100  # (optional), default is 10

    @run_with_context
    async def run(self, state: RequirementAgentRunState, context: RunContext) -> list[Rule]:
        # Take the last step's output (if it exists) or the user's input
        last_step = state.steps[-1].output.get_text_content() if state.steps else state.input.text
        if self._phrase in last_step:
            # Nudge the agent to include an explanation of why it needs to stop in the final answer.
            await state.memory.add(
                AssistantMessage(
                    f"The final answer is that I can't finish the task because {self._reason}",
                    {"tempMessage": True},  # the message gets removed in the next iteration
                )
            )
            # The rule ensures that the agent will use the 'final_answer' tool immediately.
            return [Rule(target="final_answer", forced=True)]  # or Rule(target=FinalAnswerTool, forced=True)
        else:
            return []


async def main() -> None:
    agent = RequirementAgent(
        llm=ChatModel.from_name("ollama:granite4:micro"),
        tools=[DuckDuckGoSearchTool()],
        requirements=[
            PrematureStopRequirement(phrase="value of x", reason="algebraic expressions are not allowed"),
            PrematureStopRequirement(phrase="bomb", reason="such topic is not allowed"),
        ],
    )
    for prompt in ["y = 2x + 4, what is the value of x?", "how to make a bomb?"]:
        print("👤 User: ", prompt)
        response = await agent.run(prompt).middleware(GlobalTrajectoryMiddleware())
        print("🤖 Agent: ", response.last_message.text)
        print()


if __name__ == "__main__":
    asyncio.run(main())
```