Build intelligent agents that combine retrieval with generation for enhanced AI capabilities
| Component | Description | Compatibility | Future Compatibility |
| --- | --- | --- | --- |
| Document Loaders | Load content from different formats and sources, such as PDFs, web pages, and structured text files | LangChain | BeeAI |
| Text Splitters | Split long documents into workable chunks using various strategies, e.g., fixed length or context-preserving | LangChain | BeeAI |
| Document | The basic data structure holding text content, metadata, and relevance scores for retrieval operations | BeeAI | - |
| Vector Store | Stores document embeddings and retrieves them by semantic similarity using embedding distance | LangChain | BeeAI, Llama-Index |
| Document Processors | Process and refine documents during the retrieval-generation lifecycle, including reranking and filtering | Llama-Index | - |
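Because loaders and splitters are provided through LangChain, a typical ingestion step can use LangChain classes directly before the resulting chunks are embedded into a vector store. A minimal sketch (the file path and chunking parameters are placeholders):

```python
# Load a PDF and split it into chunks using the LangChain components
# listed in the table above. Requires langchain-community and pypdf.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = PyPDFLoader("data/handbook.pdf")  # placeholder input file
pages = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(pages)
# The chunks can now be embedded and stored in a vector store (see below).
```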
The `from_name` method uses the format `provider:ClassName`, where:

- `provider` identifies the integration module (e.g., "beeai", "langchain")
- `ClassName` specifies the exact class to instantiate

The referenced class can also be imported directly:

```python
from beeai_framework.adapters.beeai.backend.vector_store import TemporalVectorStore
```
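As a sketch of how the naming convention is used when creating a store (the `VectorStore` and `EmbeddingModel` import paths and the embedding model identifier are assumptions for illustration):

```python
# Instantiate a vector store by name using the provider:ClassName convention.
# Import paths and the embedding model identifier below are assumptions.
from beeai_framework.backend.embedding import EmbeddingModel
from beeai_framework.backend.vector_store import VectorStore

embedding_model = EmbeddingModel.from_name("watsonx:ibm/granite-embedding-107m-multilingual")

# "beeai" is the provider, "TemporalVectorStore" is the class to instantiate.
vector_store = VectorStore.from_name("beeai:TemporalVectorStore", embedding_model=embedding_model)
```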
The `VectorStoreSearchTool` enables any agent to perform semantic search against a pre-populated vector store. This provides flexibility for agents that need retrieval capabilities alongside other functionalities.
```python
# Create the tool from a named vector store (provider:ClassName format).
search_tool = VectorStoreSearchTool.from_vector_store_name("beeai:TemporalVectorStore", embedding_model=embedding_model)
```

See the RAG with RequirementAgent example for the full code.
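A rough sketch of that wiring is shown below; the import paths, model identifier, and constructor arguments are assumptions rather than the exact code from the example:

```python
# Hypothetical wiring of the search tool into a RequirementAgent.
# Import paths, the model identifier, and response fields are assumptions.
import asyncio

from beeai_framework.agents.experimental import RequirementAgent
from beeai_framework.backend import ChatModel


async def main() -> None:
    llm = ChatModel.from_name("ollama:granite3.3:8b")  # assumed model identifier
    # search_tool is the VectorStoreSearchTool created in the snippet above
    agent = RequirementAgent(llm=llm, tools=[search_tool])
    response = await agent.run("What do the indexed documents say about onboarding?")
    print(response.answer.text)  # field names on the response object are assumptions


asyncio.run(main())
```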