Tools extend agent capabilities beyond text generation. They act as specialized modules that let an agent interact with external systems, access information, and execute actions in response to user queries.
The true power of tools emerges when they are integrated with an agent, which can then decide when to call each one:
Use the DuckDuckGo search tool to retrieve real-time search results from across the internet, including news, current events, or content from specific websites or domains.
Use the OpenMeteo tool to retrieve real-time weather forecasts including detailed information on temperature, wind speed, and precipitation. Access forecasts predicting weather up to 16 days in the future and archived forecasts for weather up to 30 days in the past. Ideal for obtaining up-to-date weather predictions and recent historical weather trends.
Use the Wikipedia tool to retrieve detailed information from Wikipedia.org on a wide range of topics, including famous individuals, locations, organizations, and historical events. Ideal for obtaining comprehensive overviews or specific details on well-documented subjects. May not be suitable for lesser-known or more recent topics. The information is subject to community edits, which can be inaccurate.
Use the VectorStoreSearchTool to perform semantic search against pre-populated vector stores. This tool enables agents to retrieve relevant documents from knowledge bases using semantic similarity, making it ideal for RAG (Retrieval-Augmented Generation) applications and knowledge-based question answering.
This tool requires a pre-populated vector store. Vector store population (loading and chunking documents) is typically handled offline in production applications.
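The chunking step that population performs can be illustrated without the framework. The sketch below is a simplified, framework-free illustration of overlapping character chunking with the same `chunk_size`/`chunk_overlap` idea; the helper name is hypothetical, not a BeeAI API:

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks, similar in spirit to a
    RecursiveCharacterTextSplitter configured with the same parameters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # each chunk starts `step` chars after the last
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):  # last chunk reached the end of the text
            break
    return chunks


doc = "".join(str(i % 10) for i in range(2500))
chunks = chunk_text(doc, chunk_size=1000, chunk_overlap=200)
# Starts at 0, 800, 1600 -> three chunks; adjacent chunks share 200 characters
print(len(chunks), chunks[0][-200:] == chunks[1][:200])
```

The overlap preserves context that would otherwise be cut at a chunk boundary, at the cost of some duplicated storage in the vector store.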
```python
import asyncio
import os

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.backend.document_loader import DocumentLoader
from beeai_framework.backend.embedding import EmbeddingModel
from beeai_framework.backend.text_splitter import TextSplitter
from beeai_framework.backend.vector_store import VectorStore
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools import Tool
from beeai_framework.tools.search.retrieval import VectorStoreSearchTool

POPULATE_VECTOR_DB = True
VECTOR_DB_PATH_4_DUMP = ""  # Set this path for persistency


async def setup_vector_store() -> VectorStore | None:
    """Setup vector store with BeeAI framework documentation."""
    embedding_model = EmbeddingModel.from_name(
        "watsonx:ibm/slate-125m-english-rtrvr-v2", truncate_input_tokens=500
    )

    # Load existing vector store if available
    if VECTOR_DB_PATH_4_DUMP and os.path.exists(VECTOR_DB_PATH_4_DUMP):
        print(f"Loading vector store from: {VECTOR_DB_PATH_4_DUMP}")
        from beeai_framework.adapters.beeai.backend.vector_store import TemporalVectorStore

        preloaded_vector_store: VectorStore = TemporalVectorStore.load(
            path=VECTOR_DB_PATH_4_DUMP, embedding_model=embedding_model
        )
        return preloaded_vector_store

    # Create new vector store if population is enabled
    # NOTE: Vector store population is typically done offline in production applications
    if POPULATE_VECTOR_DB:
        # Load documentation about BeeAI agents - this serves as our knowledge base
        # for answering questions about the different types of agents available
        loader = DocumentLoader.from_name(
            name="langchain:UnstructuredMarkdownLoader", file_path="docs/modules/agents.mdx"
        )
        try:
            documents = await loader.load()
        except Exception as e:
            print(f"Failed to load documents: {e}")
            return None

        # Split documents into chunks
        text_splitter = TextSplitter.from_name(
            name="langchain:RecursiveCharacterTextSplitter", chunk_size=1000, chunk_overlap=200
        )
        documents = await text_splitter.split_documents(documents)
        print(f"Loaded {len(documents)} document chunks")

        # Create vector store and add documents
        vector_store = VectorStore.from_name(
            name="beeai:TemporalVectorStore", embedding_model=embedding_model
        )
        await vector_store.add_documents(documents=documents)
        print("Vector store populated with documents")
        return vector_store

    return None


async def main() -> None:
    """
    Example demonstrating RequirementAgent using VectorStoreSearchTool.

    The agent will use the vector store search tool to find relevant information
    about BeeAI framework agents and provide comprehensive answers.

    Note: In typical applications, you would use a pre-populated vector store
    rather than populating it at runtime. This example includes population logic
    for demonstration purposes only.
    """
    # Setup vector store with BeeAI documentation
    vector_store = await setup_vector_store()
    if vector_store is None:
        raise FileNotFoundError(
            "Failed to instantiate Vector Store. "
            "Either set POPULATE_VECTOR_DB=True to create a new one, or ensure the database file exists."
        )

    # Create the vector store search tool
    search_tool = VectorStoreSearchTool(vector_store=vector_store)

    # Alternative: Create search tool using dynamic loading
    # embedding_model = EmbeddingModel.from_name("watsonx:ibm/slate-125m-english-rtrvr-v2", truncate_input_tokens=500)
    # search_tool = VectorStoreSearchTool.from_name(
    #     name="beeai:TemporalVectorStore",
    #     embedding_model=embedding_model,
    # )

    # Create RequirementAgent with the vector store search tool
    llm = ChatModel.from_name("ollama:llama3.1:8b")
    agent = RequirementAgent(
        llm=llm,
        memory=UnconstrainedMemory(),
        instructions=(
            "You are a helpful assistant that answers questions about the BeeAI framework. "
            "Use the vector store search tool to find relevant information from the documentation "
            "before providing your answer. Always search for information first, then provide a "
            "comprehensive response based on what you found."
        ),
        tools=[search_tool],
        # Log all tool calls to the console for easier debugging
        middlewares=[GlobalTrajectoryMiddleware(included=[Tool])],
    )

    query = "What types of agents are available in BeeAI?"
    response = await agent.run(query)
    print(f"query: {query}\nResponse: {response.last_message.text}")


if __name__ == "__main__":
    asyncio.run(main())
```
Leverage the Model Context Protocol (MCP) to define, initialize, and utilize tools on compatible MCP servers. These servers expose executable functionalities, enabling AI models to perform tasks such as computations, API calls, or system operations.
The Python tool allows AI agents to execute Python code within a secure, sandboxed environment, with access to files that are either provided by the user or created during execution. It is configured through the following components:
- LocalPythonStorage – handles where Python code is stored and run.
  - local_working_dir – a temporary folder where the code is saved before running.
  - interpreter_working_dir – the folder where the code actually runs, set by the CODE_INTERPRETER_TMPDIR setting.
- PythonTool – connects to an external Python interpreter to run code.
  - code_interpreter_url – the web address where the code gets executed (default: http://127.0.0.1:50081).
  - storage – controls where the code is stored. By default it saves files locally using LocalPythonStorage; you can set up a different storage option, like cloud storage, if needed.
```python
import asyncio
import os
import sys
import tempfile
import traceback

from dotenv import load_dotenv

from beeai_framework.adapters.ollama import OllamaChatModel
from beeai_framework.agents.react import ReActAgent
from beeai_framework.errors import FrameworkError
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.code import LocalPythonStorage, PythonTool

# Load environment variables
load_dotenv()


async def main() -> None:
    llm = OllamaChatModel("llama3.1")
    storage = LocalPythonStorage(
        local_working_dir=tempfile.mkdtemp("code_interpreter_source"),
        # CODE_INTERPRETER_TMPDIR should point to where the code interpreter stores its files
        interpreter_working_dir=os.getenv("CODE_INTERPRETER_TMPDIR", "./tmp/code_interpreter_target"),
    )
    python_tool = PythonTool(
        code_interpreter_url=os.getenv("CODE_INTERPRETER_URL", "http://127.0.0.1:50081"),
        storage=storage,
    )
    agent = ReActAgent(llm=llm, tools=[python_tool], memory=UnconstrainedMemory())

    result = await agent.run("Calculate 5036 * 12856 and save the result to answer.txt").on(
        "update", lambda data, event: print(f"Agent 🤖 ({data.update.key}) : ", data.update.parsed_value)
    )
    print(result.last_message.text)

    result = await agent.run("Read the content of answer.txt?").on(
        "update", lambda data, event: print(f"Agent 🤖 ({data.update.key}) : ", data.update.parsed_value)
    )
    print(result.last_message.text)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())
```
The Sandbox tool provides a way to define and run custom Python functions in a secure, sandboxed environment. It’s ideal when you need to encapsulate specific functionality that can be called by the agent.
Custom tools allow you to build your own specialized tools to extend agent capabilities. To create a new tool, implement the base Tool class. The framework provides flexible options for tool creation, from simple to complex implementations.
Instantiate the Tool by passing your own handler (function) along with a name, description, and input schema.
Here’s an example of a simple custom tool that provides riddles:
```python
import asyncio
import random
import sys
from typing import Any

from pydantic import BaseModel, Field

from beeai_framework.context import RunContext
from beeai_framework.emitter import Emitter
from beeai_framework.errors import FrameworkError
from beeai_framework.tools import StringToolOutput, Tool, ToolRunOptions


class RiddleToolInput(BaseModel):
    riddle_number: int = Field(description="Index of riddle to retrieve.")


class RiddleTool(Tool[RiddleToolInput, ToolRunOptions, StringToolOutput]):
    name = "Riddle"
    description = "It selects a riddle to test your knowledge."
    input_schema = RiddleToolInput

    data = (
        "What has hands but can't clap?",
        "What has a face and two hands but no arms or legs?",
        "What gets wetter the more it dries?",
        "What has to be broken before you can use it?",
        "What has a head, a tail, but no body?",
        "The more you take, the more you leave behind. What am I?",
        "What goes up but never comes down?",
    )

    def __init__(self, options: dict[str, Any] | None = None) -> None:
        super().__init__(options)

    def _create_emitter(self) -> Emitter:
        return Emitter.root().child(
            namespace=["tool", "example", "riddle"],
            creator=self,
        )

    async def _run(
        self, input: RiddleToolInput, options: ToolRunOptions | None, context: RunContext
    ) -> StringToolOutput:
        index = input.riddle_number % len(self.data)
        riddle = self.data[index]
        return StringToolOutput(result=riddle)


async def main() -> None:
    tool = RiddleTool()
    tool_input = RiddleToolInput(riddle_number=random.randint(0, len(RiddleTool.data)))
    result = await tool.run(tool_input)
    print(result)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except FrameworkError as e:
        sys.exit(e.explain())
```
The input schema (input_schema) processing can be asynchronous when needed for more complex validation or preprocessing.
For structured data responses, use JSONToolOutput or implement your own custom output type.
When creating custom tools, follow these key requirements:

1. Implement the Tool class

To create a custom tool, extend the base Tool class and implement several required components. The output must be an implementation of the ToolOutput interface, such as StringToolOutput for text responses or JSONToolOutput for structured data.

2. Create a descriptive name

Your tool needs a clear, descriptive name that follows naming conventions:
```python
name = "MyNewTool"
```
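The allowed character set (a-z, A-Z, 0-9, '-', '_') is easy to check programmatically; a quick sketch (the helper and regex are illustrative, not part of the framework):

```python
import re

# Mirrors the documented character rule: letters, digits, '-' and '_' only
TOOL_NAME_RE = re.compile(r"^[a-zA-Z0-9_-]+$")


def is_valid_tool_name(name: str) -> bool:
    """True if the name uses only a-z, A-Z, 0-9, '-' or '_'."""
    return bool(TOOL_NAME_RE.match(name))


print(is_valid_tool_name("MyNewTool"))   # True
print(is_valid_tool_name("my tool!"))    # False: space and '!' are not allowed
```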
The name must only contain characters a-z, A-Z, 0-9, or one of - or _.

3. Write an effective description

The description is crucial as it determines when the agent uses your tool:
```python
description = "Takes X action when given Y input resulting in Z output"
```
You should experiment with different natural language descriptions to ensure the tool is used in the correct circumstances. You can also include usage tips and guidance for the agent in the description, but it's advisable to keep the description succinct to reduce the probability of conflicting with other tools or adversely affecting agent behavior.

4. Define a clear input schema

Create a Pydantic model that defines the expected inputs with helpful descriptions:
```python
from typing import Literal

from pydantic import BaseModel, Field


class OpenMeteoToolInput(BaseModel):
    location_name: str = Field(description="The name of the location to retrieve weather information.")
    country: str | None = Field(description="Country name.", default=None)
    start_date: str | None = Field(
        description="Start date for the weather forecast in the format YYYY-MM-DD (UTC)", default=None
    )
    end_date: str | None = Field(
        description="End date for the weather forecast in the format YYYY-MM-DD (UTC)", default=None
    )
    temperature_unit: Literal["celsius", "fahrenheit"] = Field(
        description="The unit to express temperature", default="celsius"
    )
```
Source: /python/beeai_framework/tools/weather/openmeteo.py

The input schema is a required field used to define the format of the input to your tool. The agent formalises the natural language input(s) it has received and structures them into the fields described in the tool's input schema, which is created based on the MyNewToolInput class. Keep your tool input schema simple and provide schema descriptions to help the agent interpret the fields.

5. Implement the _run() method

This method contains the core functionality of your tool, processing the input and returning the appropriate output.
If your tool is providing data to the agent, try to ensure that the data is relevant and free of extraneous metadata. Preprocessing data to improve relevance and remove unnecessary content conserves agent memory and improves overall performance.
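As an illustration of that preprocessing advice, here is a framework-free sketch that strips noisy metadata from an API-style response before handing it to the agent. The field names and truncation limit are hypothetical choices, not a BeeAI API:

```python
def prepare_for_agent(record: dict, keep: tuple[str, ...] = ("title", "summary", "url")) -> dict:
    """Keep only the fields the agent actually needs, truncating long text
    so tool output does not flood the agent's context window."""
    cleaned = {k: record[k] for k in keep if k in record}
    if "summary" in cleaned and len(cleaned["summary"]) > 500:
        cleaned["summary"] = cleaned["summary"][:500] + "..."
    return cleaned


raw = {
    "title": "BeeAI tools",
    "summary": "Tools extend agent capabilities.",
    "url": "https://example.com",
    "etag": "abc123",              # transport metadata the agent never needs
    "server_timing": {"db": 12},   # diagnostics that would waste context tokens
}
print(prepare_for_agent(raw))
```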
If your tool encounters an error that is fixable, you can return a hint to the agent; the agent will try to reuse the tool in the context of the hint. This can improve the agent's ability to recover from errors.
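The hint pattern can be sketched without the framework: on a recoverable input error, the tool body returns actionable guidance instead of an opaque failure. The function, data, and error-message format below are all illustrative:

```python
import difflib


def lookup_city_population(city: str, table: dict[str, int]) -> str:
    """Toy tool body: on a recoverable input error, return a hint the agent
    can act on instead of raising an opaque failure."""
    if city in table:
        return str(table[city])
    # Recoverable error: suggest close matches so the agent can retry with a fix
    suggestions = difflib.get_close_matches(city, table.keys(), n=3)
    return (
        f"Error: unknown city {city!r}. "
        f"Hint: did you mean one of {suggestions}? Retry with an exact name."
    )


table = {"Prague": 1_300_000, "Brno": 380_000}
print(lookup_city_population("Pragu", table))  # misspelled input triggers the hint
```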
When building tools, remember that the tool is invoked by a somewhat unpredictable third party (the agent). Ensure that sufficient guardrails are in place to prevent adverse outcomes.
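One common guardrail is validating agent-supplied input against an allow-list before executing anything. A minimal sketch (the policy and helper are illustrative; a production guardrail would also need timeouts and resource limits):

```python
import re

# Allow only digits, arithmetic operators, parentheses, '.' and spaces
ALLOWED_EXPR = re.compile(r"^[0-9+\-*/(). ]+$")


def safe_eval_arithmetic(expression: str):
    """Evaluate only plain arithmetic; reject anything else the agent sends."""
    if not ALLOWED_EXPR.match(expression):
        raise ValueError(f"Rejected expression: {expression!r}")
    # eval is tolerable here only because the allow-list above excludes
    # names, attribute access, and quotes
    return eval(expression, {"__builtins__": {}}, {})


print(safe_eval_arithmetic("5036 * 12856"))  # 64742816
```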