Clone the starter repo
Choose your preferred programming language and get started with the BeeAI Framework starter template.
Install the dependencies
If you’re using Python, make sure you have uv installed.
Configure your LLM Backend
If you choose to run a local model, Ollama must be installed and running, with the granite3.3 model pulled. If you run into issues, run

```shell
ollama list
```

to verify the model name and ensure granite3.3 is installed or that your alias points to it.

If you chose to use a hosted model, edit the `LLM_CHAT_MODEL_NAME` in the `.env.example` file. Add your API key for your preferred provider to your `.env.example` file and uncomment the line.
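For the hosted route, the relevant `.env.example` entries look something like this sketch. The provider variable name and model string below are illustrative assumptions; use the exact names already present in your copy of the file.

```shell
# .env.example -- set the chat model, then uncomment your provider's key line
LLM_CHAT_MODEL_NAME=openai:gpt-4o        # assumed example value; pick your provider's model
# OPENAI_API_KEY=your-api-key-here       # illustrative variable name -- check the file
```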