This guide uses Docker to run Ollama. For Docker installation instructions, see the Docker documentation.
CPU only:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
NVIDIA GPU (requires the NVIDIA Container Toolkit to be installed):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
AMD GPU (uses the ROCm image):

docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
Alternatively, create a docker-compose.yml:
services:
  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    # For NVIDIA GPU, uncomment:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

volumes:
  ollama:
Then run:
docker compose up -d
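For an AMD GPU, the compose file can instead use the ROCm image and pass the GPU devices through, mirroring the flags of the docker run command above. A sketch under that assumption:

```yaml
services:
  ollama:
    image: ollama/ollama:rocm
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    # Pass the ROCm kernel driver and GPU render nodes into the container
    devices:
      - /dev/kfd
      - /dev/dri

volumes:
  ollama:
```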
The server listens on port 11434; models are stored in /root/.ollama inside the container (persisted in the ollama volume).
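Once the container is running, Ollama exposes an HTTP API on port 11434. A minimal Python sketch of a request to the /api/generate endpoint using only the standard library (the model name "llama3" is a placeholder; substitute a model you have pulled into the container):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object rather than a response stream
    }).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage (requires the container to be running):
# with urllib.request.urlopen(build_generate_request("llama3", "Hi")) as resp:
#     print(json.loads(resp.read())["response"])
```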