LLM Harbor orchestrates multiple model backends, APIs, and frontends via Docker Compose. Security relies on proper access controls, network segmentation, and careful exposure of services.
- Harbor is in active development (v0.x) and is not designed for production internet exposure without additional hardening
- First launch requires creating a local admin account for Open WebUI; do this before any internet exposure
- Tunneling exposes services to the internet; use `harbor tunnel` with authentication enabled
By default, Harbor services bind to localhost. Keep this configuration for local development:
```bash
# Default setup - localhost only
harbor up
```
For local network access without internet exposure:
```bash
# Get LAN URL
harbor url webui

# Print QR code for mobile access
harbor qr
```
If you must expose services to the internet:
```bash
# Use tunnel with authentication
harbor tunnel

# Expose specific service
harbor tunnel vllm
```
Before exposing anything to the internet:
- Enable authentication on all services
- Set up TLS/HTTPS termination
- Configure firewall rules to limit access
- Consider using a VPN instead of direct exposure
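The TLS and authentication items on this checklist can be handled together by a reverse proxy in front of the locally bound services. A minimal Caddyfile sketch, assuming Caddy as the proxy; the hostname, upstream port, and bcrypt hash are placeholders:

```
harbor.example.com {
    basic_auth {
        # Generate the hash with `caddy hash-password`
        admin <bcrypt-hash>
    }
    reverse_proxy localhost:8080
}
```

Caddy obtains and renews TLS certificates automatically for the named host, so the upstream service itself never needs to be exposed directly.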
- Harbor services run in Docker containers with default isolation
- Avoid mounting unnecessary host paths
- Do not expose Docker socket to containers
- Keep Docker and Docker Compose updated:
  - Docker 20.10+ required
  - Docker Compose 2.23.1+ required
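The version minimums above can be checked with a small pure-shell comparison. `version_ge` is a hypothetical helper (it relies on `sort -V`), and the literal versions are illustrative; on a real host, feed it the numbers reported by `docker --version` and `docker compose version`:

```shell
# True when $1 >= $2 in dotted-version order
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "24.0.7" "20.10"  && echo "Docker OK"
version_ge "2.20.0" "2.23.1" || echo "Compose too old"
```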
Harbor uses official images from trusted sources:
- Ollama: `ollama/ollama`
- Open WebUI: `ghcr.io/open-webui/open-webui`
- vLLM: `vllm/vllm-openai`
- llama.cpp: `ghcr.io/ggerganov/llama.cpp`
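To reduce supply-chain risk, these images can be pinned to explicit tags or digests rather than `:latest`. A compose-override sketch; `<pinned-tag>` and the service names are placeholders, so check each registry for current releases:

```yaml
services:
  ollama:
    image: ollama/ollama:<pinned-tag>
  webui:
    image: ghcr.io/open-webui/open-webui:<pinned-tag>
```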
## 3) Protect API Keys and Credentials
If using external model providers:
- Store API keys in environment variables, not in code
- Use `harbor config set` for sensitive values
- Rotate keys regularly
- Restrict API key permissions to minimum required
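As an illustration of the first point, keys can live in an untracked `.env` file and be exported at run time. A minimal sketch; the file name and the `OPENAI_API_KEY` variable are placeholders for your provider's actual key:

```shell
# Keep the key in a file that never enters version control
cat > .env <<'EOF'
OPENAI_API_KEY=sk-REPLACE_ME
EOF
echo ".env" >> .gitignore

# Export everything defined in .env into the current environment
set -a; . ./.env; set +a
[ -n "$OPENAI_API_KEY" ] && echo "key loaded"
```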
- Create strong admin password on first Open WebUI launch
- Use unique passwords for each service
- Enable 2FA where supported
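For the first point, a strong random password can be generated locally with plain shell tools. A sketch assuming `/dev/urandom` is available; a reputable password manager works just as well:

```shell
# 24 random alphanumeric characters from the kernel's entropy source
pw="$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 24)"
printf '%s\n' "$pw"
```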
Ensure NVIDIA Container Toolkit is properly configured:
```bash
# Verify GPU access
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```
AMD GPU users must configure runtime properly:
```bash
# Required for AMD GPU support
amd-ctk runtime configure
sudo systemctl restart docker

# Verify GPU access
docker run --rm --runtime=amd -e AMD_VISIBLE_DEVICES=all ubuntu ls -l /dev/dri
```
- Create admin account immediately on first launch
- Disable user registration if not needed
- Review and restrict model access
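Registration can be turned off through Open WebUI's `ENABLE_SIGNUP` environment variable. A hedged compose-override sketch; the `webui` service name is an assumption, and the variable should be verified against the Open WebUI documentation for your version:

```yaml
services:
  webui:
    environment:
      - ENABLE_SIGNUP=false   # reject new account registration after the admin account exists
```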
- By default, Ollama binds to `localhost:11434`
- For LAN access, set `OLLAMA_HOST=0.0.0.0:11434`
- Pull only trusted models
- Disable logging if privacy is required
- Configure rate limiting
- Use HTTPS for all upstream connections
## 6) Monitoring and Auditing
```bash
# View service logs
harbor logs <service>

# Check running containers
docker ps
```
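Saved logs can also be scanned offline for suspicious patterns. A minimal sketch with fabricated sample lines; the `failed` pattern is an assumption, so match it to your service's real log format:

```shell
# Illustrative log lines standing in for a real exported log
printf 'login ok user=alice\nlogin FAILED user=bob\nlogin FAILED user=bob\n' > access.log

# Case-insensitive count of failed attempts
grep -ci 'failed' access.log
# prints 2
```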
- Review Open WebUI access logs
- Monitor Docker container activity
- Check reverse proxy logs if using Traefik
- Harbor Repository: https://github.com/av/harbor
- Harbor Wiki: https://github.com/av/harbor/wiki
- Docker Security: https://docs.docker.com/engine/security/