Yes, it’s absolutely possible! You can run a small LLM such as Gemma 3 4B with Ollama inside a basic CircleCI pipeline and integrate AI capabilities directly into your CI/CD workflows. Its capabilities are limited, of course, but they are enough for simple agents or semantic unit tests.
Here is an example CircleCI config that uses Ollama and runs on the free plan (with the `large` resource class). It demonstrates how to run the Ollama Docker image as a secondary container in a CI pipeline, pull a model, and run a basic script against the Ollama service.
```yaml
version: 2.1

jobs:
  ollama-example:
    docker:
      - image: cimg/python:3.9
      - image: ollama/ollama:latest
        name: ollama
    resource_class: large
    steps:
      - checkout
      - run:
          name: Wait for Ollama to start
          command: |
            until curl -s http://ollama:11434/; do
              echo "Waiting for Ollama to start..."
              sleep 5
            done
      - run:
          name: Pull Gemma 3 model using the web API
          command: |
            curl -X POST http://ollama:11434/api/pull \
              -H "Content-Type: application/json" \
              -d '{"model": "gemma3:4b"}'
      - run:
          name: Run a Python script using Ollama
          command: |
            python script.py

workflows:
  ollama-workflow:
    jobs:
      - ollama-example
```
And the Python script (`script.py`):
```python
import requests
from pprint import pprint

# The generate endpoint is /api/generate (there is no /api/completion);
# stream=False returns one JSON object instead of streamed chunks.
response = requests.post(
    'http://ollama:11434/api/generate',
    json={'model': 'gemma3:4b', 'prompt': 'Hello, Ollama!', 'stream': False},
)
pprint(response.json())
```
This configuration is deliberately simple and can serve as a starting point for integrating Ollama into a CI pipeline.
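As a sketch of the semantic unit tests mentioned above, you can ask the model a question and assert that the answer covers the expected concepts. The `ollama` hostname and `gemma3:4b` model come from the pipeline above; the question, the keyword list, and the `mentions_all` helper are illustrative assumptions, not a fixed API:

```python
import requests

OLLAMA_URL = "http://ollama:11434/api/generate"


def ask(prompt: str) -> str:
    """Send a single non-streaming generate request to Ollama."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "gemma3:4b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]


def mentions_all(answer: str, keywords: list[str]) -> bool:
    """Loose semantic check: the answer must mention every keyword."""
    lowered = answer.lower()
    return all(k.lower() in lowered for k in keywords)


if __name__ == "__main__":
    # Hypothetical test case: a correct answer should at least mention "commit".
    answer = ask("In one sentence: what does `git rebase` do?")
    assert mentions_all(answer, ["commit"]), answer
```

Keyword matching is crude; for fuzzier checks you could instead ask a second model call to grade the first answer, at the cost of an extra request per test.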