This guide shows how to configure OpenClaw to use a locally running LLM (such as Ollama) instead of cloud-based APIs. This approach offers privacy benefits and eliminates API costs.
First, confirm your local LLM service is operational:
```bash
# For Ollama
curl http://localhost:11434/api/tags

# For LocalAI
curl http://localhost:8080/models
```
Make sure your desired model is loaded and available.
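If you want to script this check, a small helper can confirm that the model name appears in the tag list. This is a sketch: it assumes the compact JSON that Ollama's `/api/tags` endpoint returns, and the base URL and model name are placeholders to adjust (a JSON-aware tool like `jq` would be more robust than `grep`):

```bash
# has_model: succeeds if the /api/tags JSON on stdin mentions the model name.
has_model() {
  grep -q "\"name\":\"$1"
}

# Probe the local Ollama service (URL and model are assumptions; adjust to yours).
if curl -s http://localhost:11434/api/tags | has_model llama3.1; then
  echo "llama3.1 is available"
else
  echo "llama3.1 not found - try: ollama pull llama3.1"
fi
```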
Modify your .env file to use the local LLM:
```bash
# Remove or comment out cloud API keys
# OPENAI_API_KEY=sk-your-api-key-here
# ANTHROPIC_API_KEY=sk-ant-your-api-key-here

# Set the model provider to ollama or local
MODEL_PROVIDER=ollama  # or localai, depending on your service

# Point to your local LLM endpoint
# For Docker Desktop on Mac/Windows:
OLLAMA_BASE_URL=http://host.docker.internal:11434
# For Linux hosts with Docker:
# OLLAMA_BASE_URL=http://172.17.0.1:11434

# Specify the model to use
OLLAMA_MODEL=llama3.1  # Replace with your model name

# Alternative for LocalAI
# LOCALAI_BASE_URL=http://host.docker.internal:8080
# LOCALAI_MODEL=ggml-model-name
```
Update your docker-compose.yml to allow the OpenClaw container to access the host network:
```yaml
version: '3.8'

services:
  openclaw-gateway:
    image: ${OPENCLAW_IMAGE:-openclaw:local}
    container_name: openclaw-gateway
    restart: unless-stopped
    ports:
      - "${OPENCLAW_GATEWAY_PORT:-18789}:18789"
      - "${OPENCLAW_BRIDGE_PORT:-18790}:18790"
    volumes:
      - ${OPENCLAW_CONFIG_DIR}:/home/node/.openclaw
      - ${OPENCLAW_WORKSPACE_DIR}:/home/node/.openclaw/workspace
    environment:
      - HOME=/home/node
      - OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}
      # Ollama configuration
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - MODEL_PROVIDER=ollama
      - OLLAMA_MODEL=llama3.1
    command: ["node", "dist/index.js", "gateway", "--bind", "${OPENCLAW_GATEWAY_BIND:-lan}", "--port", "18789"]
    # Allow access to host services
    extra_hosts:
      - "host.docker.internal:host-gateway"

  openclaw-cli:
    image: ${OPENCLAW_IMAGE:-openclaw:local}
    container_name: openclaw-cli
    restart: unless-stopped
    volumes:
      - ${OPENCLAW_CONFIG_DIR}:/home/node/.openclaw
      - ${OPENCLAW_WORKSPACE_DIR}:/home/node/.openclaw/workspace
    environment:
      - HOME=/home/node
      - OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}
      - BROWSER=echo
    entrypoint: ["node", "dist/index.js"]
    stdin_open: true
    tty: true
```
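If you'd rather not edit your main compose file, Docker Compose automatically merges a `docker-compose.override.yml` placed next to it, so the Ollama-related settings could live there instead. This is a sketch under that assumption; the service name must match the one in your compose file, and Compose merges `environment` entries from both files:

```yaml
# docker-compose.override.yml -- merged automatically by `docker compose up`
services:
  openclaw-gateway:
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - MODEL_PROVIDER=ollama
      - OLLAMA_MODEL=llama3.1
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

Running `docker compose config` shows the merged result, which is a quick way to confirm the override was picked up.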
For a cleaner setup, you can run Ollama in a Docker container on the same network as OpenClaw:
```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ./ollama-data:/root/.ollama
    ports:
      - "11434:11434"
    networks:
      - openclaw-net
    # Optional: GPU support for NVIDIA
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

  openclaw-gateway:
    image: ${OPENCLAW_IMAGE:-openclaw:local}
    container_name: openclaw-gateway
    restart: unless-stopped
    ports:
      - "${OPENCLAW_GATEWAY_PORT:-18789}:18789"
      - "${OPENCLAW_BRIDGE_PORT:-18790}:18790"
    volumes:
      - ${OPENCLAW_CONFIG_DIR}:/home/node/.openclaw
      - ${OPENCLAW_WORKSPACE_DIR}:/home/node/.openclaw/workspace
    environment:
      - HOME=/home/node
      - OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}
      # Point to the Ollama service on the same network
      - OLLAMA_BASE_URL=http://ollama:11434
      - MODEL_PROVIDER=ollama
      - OLLAMA_MODEL=llama3.1
    command: ["node", "dist/index.js", "gateway", "--bind", "${OPENCLAW_GATEWAY_BIND:-lan}", "--port", "18789"]
    depends_on:
      - ollama
    networks:
      - openclaw-net

  openclaw-cli:
    image: ${OPENCLAW_IMAGE:-openclaw:local}
    container_name: openclaw-cli
    restart: unless-stopped
    volumes:
      - ${OPENCLAW_CONFIG_DIR}:/home/node/.openclaw
      - ${OPENCLAW_WORKSPACE_DIR}:/home/node/.openclaw/workspace
    environment:
      - HOME=/home/node
      - OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}
      - BROWSER=echo
    entrypoint: ["node", "dist/index.js"]
    stdin_open: true
    tty: true
    networks:
      - openclaw-net

networks:
  openclaw-net:
    driver: bridge
```
Once Ollama is running, pull your desired model:
```bash
# If Ollama is running on the host
ollama pull llama3.1

# If Ollama is in Docker
docker exec ollama ollama pull llama3.1
```
For reference, here is a complete `.env` for the Docker-based Ollama setup:

```bash
cat > .env << EOF
# Gateway access token
OPENCLAW_GATEWAY_TOKEN=your-random-token-here

# Gateway ports
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_BRIDGE_PORT=18790
OPENCLAW_GATEWAY_BIND=lan

# Volume paths
OPENCLAW_CONFIG_DIR=/opt/openclaw/config
OPENCLAW_WORKSPACE_DIR=/opt/openclaw/workspace

# Docker image
OPENCLAW_IMAGE=ghcr.io/openclaw/openclaw:main

# Ollama configuration (when running Ollama in Docker)
OLLAMA_BASE_URL=http://ollama:11434
MODEL_PROVIDER=ollama
OLLAMA_MODEL=llama3.1
EOF
```
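The gateway token should be unpredictable rather than a placeholder. One way to generate one, assuming `openssl` is installed, is:

```bash
# Generate a 32-byte random token, hex-encoded (64 characters)
TOKEN=$(openssl rand -hex 32)
echo "OPENCLAW_GATEWAY_TOKEN=$TOKEN"
```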
After making these changes, restart your services:
```bash
# Stop existing services
docker compose down

# If running Ollama separately, make sure it's running first
docker compose up -d ollama

# Start with the new configuration
docker compose up -d
```
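Because the gateway container may come up before Ollama is ready to serve requests, a small retry loop can gate the rest of your startup script. This is a sketch; the URL, attempt count, and timeouts are assumptions to tune:

```bash
# wait_for URL ATTEMPTS: poll until the URL answers or attempts run out.
wait_for() {
  i=0
  while [ "$i" -lt "$2" ]; do
    curl -s --max-time 2 "$1" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

wait_for http://localhost:11434/api/tags 5 || echo "Ollama is not answering yet"
```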
Test that OpenClaw can communicate with your local LLM:
```bash
# Check that all services are running
docker compose ps

# Check OpenClaw logs for any connection errors
docker compose logs openclaw-gateway

# Test the API connection (from within the container)
docker compose exec openclaw-gateway curl -X POST http://ollama:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "test", "stream": false}'

# Or if Ollama is on the host
docker compose exec openclaw-gateway curl -X POST http://host.docker.internal:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "test", "stream": false}'
```
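When scripting these tests, building the JSON body with `printf` keeps the quoting manageable and lets the model name come from a variable (the model name here is an assumption):

```bash
MODEL=llama3.1
# Assemble the /api/generate request body; %s is substituted safely by printf.
PAYLOAD=$(printf '{"model": "%s", "prompt": "test", "stream": false}' "$MODEL")
echo "$PAYLOAD"
# Then: curl -s -X POST "$OLLAMA_BASE_URL/api/generate" -d "$PAYLOAD"
```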
A few troubleshooting tips:

- Verify the gateway UI loads at `http://localhost:18789/?token=<your-token>`.
- Use `http://ollama:11434` as the base URL when Ollama runs in the same Docker network, or `http://host.docker.internal:11434` when it runs on the host, and double-check the model name (e.g. `llama3.1`).
- If `host.docker.internal` doesn't work on Linux: try `172.17.0.1` (the default Docker bridge gateway) or run Ollama in the same Docker network.
- Run `ollama list` (or `docker exec ollama ollama list`) to see available models, and pull missing ones with `ollama pull llama3.1`.
- OpenClaw's configuration lives in `~/.openclaw/openclaw.json`.

Any questions?
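If you're unsure which endpoint applies to your setup, a small probe can pick the first reachable candidate. This is a sketch; the candidate order and timeouts are assumptions:

```bash
# pick_ollama_url URL...: print the first base URL whose /api/tags answers.
pick_ollama_url() {
  for url in "$@"; do
    if curl -s --max-time 2 "$url/api/tags" >/dev/null 2>&1; then
      echo "$url"
      return 0
    fi
  done
  return 1
}

pick_ollama_url http://ollama:11434 http://host.docker.internal:11434 http://172.17.0.1:11434 \
  || echo "no Ollama endpoint reachable"
```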
Feel free to contact us; you'll find all contact information on our contact page.