This guide covers Docker requirements and usage for LLM Harbor. Harbor uses Docker Compose under the hood to orchestrate all services.
Verify that Docker and the Docker Compose v2 plugin are available:

```bash
docker --version
docker compose version
```
For Docker installation instructions, see the official Docker documentation.
Harbor is a CLI tool that manages Docker Compose stacks: it starts and stops services, orchestrates how they connect, and can export a standalone Compose file via `harbor eject`. Harbor itself is NOT a container; it is installed via a shell script, pip, or npm and runs directly on the host system.
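Conceptually, Harbor assembles a `docker compose` invocation with one `-f` flag per service you name. A minimal sketch of that idea, assuming a hypothetical one-file-per-service naming scheme (this is illustrative, not Harbor's actual internals):

```shell
# Hypothetical sketch: build the `docker compose` file list for a set of
# services. The compose.<service>.yml naming is an assumption for
# illustration, not Harbor's actual file layout.
compose_args() {
  args="-f compose.yml"
  for svc in "$@"; do
    args="$args -f compose.$svc.yml"
  done
  printf '%s\n' "$args"
}

compose_args searxng llamacpp
# -f compose.yml -f compose.searxng.yml -f compose.llamacpp.yml
```

The real merge logic is more involved; this only illustrates the multi-file Compose pattern that makes services composable.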
First, install the Harbor CLI on your host system:
```bash
# Recommended: install via the curl script
curl -fsSL https://raw.githubusercontent.com/av/harbor/main/install.sh | bash

# Or via pip
pip install llm-harbor

# Or via npm
npm install -g @avcodes/harbor

# Run Harbor doctor to check Docker and system compatibility
harbor doctor
```
Start the default stack (Open WebUI + Ollama):
```bash
harbor up
```
Start with additional services:
```bash
# Add web RAG
harbor up searxng

# Add voice chat (STT/TTS)
harbor up speaches

# Multiple backends
harbor up ollama llamacpp vllm

# Image generation
harbor up comfyui

# Full workflow platform
harbor up dify
```

Once the stack is running, access the web UI:

```bash
# Open in browser
harbor open

# Get URL
harbor url webui

# Print QR code for mobile access
harbor qr
```
Harbor provides Docker management commands:
```bash
# View service logs
harbor logs <service>

# Execute command in container
harbor exec <service> <command>

# Shell into container
harbor shell <service>

# Stop services
harbor down
```
Export your Harbor setup to a standalone Docker Compose file:
```bash
# Export specific services
harbor eject searxng llamacpp > docker-compose.harbor.yml

# Then run with docker compose
docker compose -f docker-compose.harbor.yml up -d
```
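The ejected file is a regular Compose file you can edit and version like any other. A shortened sketch of what it might contain (service options and ports here are illustrative, not Harbor's exact output):

```yaml
# Illustrative only - the real file is generated by `harbor eject`
services:
  searxng:
    image: searxng/searxng
    ports:
      - "8080:8080"
  llamacpp:
    image: ghcr.io/ggerganov/llama.cpp
    volumes:
      - ./models:/models
```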
Harbor uses official images from trusted sources:
| Service | Docker Image |
|---|---|
| Ollama | `ollama/ollama` |
| Open WebUI | `ghcr.io/open-webui/open-webui` |
| vLLM | `vllm/vllm-openai` |
| llama.cpp | `ghcr.io/ggerganov/llama.cpp` |
| SearXNG | `searxng/searxng` |
| ComfyUI | `comfyanonymous/comfyui` |
| Speaches | `ghcr.io/speaches-ai/speaches` |
| Dify | `langgenius/dify` |
| Traefik | `traefik:latest` |
| Qdrant | `qdrant/qdrant` |
For NVIDIA GPUs, ensure the NVIDIA Container Toolkit is installed:

```bash
# Verify GPU access
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```
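If you run an ejected Compose file directly, NVIDIA GPU access is requested through the standard Compose device reservation. A minimal sketch (the service name is illustrative):

```yaml
# Standard Compose syntax for requesting NVIDIA GPUs;
# the service shown is just an example
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```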
For AMD GPUs, configure the AMD container runtime:

```bash
# Required for AMD GPU support
amd-ctk runtime configure
sudo systemctl restart docker

# Verify GPU access
docker run --rm --runtime=amd -e AMD_VISIBLE_DEVICES=all ubuntu ls -l /dev/dri
```
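In a Compose file, the same AMD runtime flags map onto the `runtime` and `environment` keys. A sketch mirroring the `docker run` command above (the service name and image tag are assumptions for illustration):

```yaml
# Illustrative Compose equivalent of the docker run flags above
services:
  ollama:
    image: ollama/ollama:rocm
    runtime: amd
    environment:
      - AMD_VISIBLE_DEVICES=all
```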
Any questions? Feel free to contact us; you can find all contact information on our contact page.