The OpenAI API provides a simple interface to state-of-the-art AI models for text generation, natural language processing, computer vision, and more.

OpenAI Server

OpenAIServer allows you to expose your agents and LLMs to external systems that support the Chat Completions or Responses API.

Key benefits:
  • Fast setup with minimal configuration
  • Support for both the Chat Completions and Responses APIs
  • Register multiple agents and LLMs on a single server
  • Custom server settings
from beeai_framework.adapters.openai.serve.server import OpenAIAPIType, OpenAIServer, OpenAIServerConfig
from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
from beeai_framework.tools.weather import OpenMeteoTool


def main() -> None:
    llm = ChatModel.from_name("ollama:granite4:micro")
    agent = RequirementAgent(
        llm=llm,
        tools=[DuckDuckGoSearchTool(), OpenMeteoTool()],
        memory=UnconstrainedMemory(),
    )

    # Expose the Responses API on port 9998.
    server = OpenAIServer(config=OpenAIServerConfig(port=9998, api=OpenAIAPIType.RESPONSES))
    server.register(agent, name="agent")  # callable as model "agent"
    server.register(llm)  # the chat model is exposed as well
    server.serve()


if __name__ == "__main__":
    main()

You can then call the exposed entities via cURL:
curl --location 'http://0.0.0.0:9998/responses' \
--header 'Content-Type: application/json' \
--data '{
    "model": "agent",
    "conversation": "123",
    "stream": false,
    "input": "Hello, how are you?"
}'