LLM Harbor is a CLI tool and companion application that lets you spin up a complete local LLM stack with a single command. It orchestrates Docker Compose services, including LLM backends, frontends, and supporting tools, all pre-wired to work together without manual configuration. Harbor manages 50+ services, including Ollama, Open WebUI, vLLM, llama.cpp, SearXNG, Dify, and more. It is designed for local LLM experimentation and development, not production deployment.
## Features

- One-command setup with `harbor up`
- Support for 15+ LLM backends (Ollama, vLLM, llama.cpp, TGI, SGLang)
- MCP ecosystem integration (MetaMCP, MCPO)
- Image generation with ComfyUI + Flux
- Local web RAG with SearXNG + Perplexica
- LLM workflow platforms (Dify, n8n, LangFlow, Flowise)
- Voice chat via Speaches (STT/TTS)
- QR codes for mobile/LAN access
- Built-in tunneling via Traefik
- Configuration profiles for different scenarios
- Local command history for reproducible setups
- Export to standalone Docker Compose with `harbor eject`
- Harbor Boost for LLM workflow scripting
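To make the feature list above concrete, here is a sketch of a typical session. It assumes Harbor and Docker are installed; the `searxng` service handle and the exact output redirection for `harbor eject` are illustrative, not confirmed against the CLI reference:

```shell
# Start the default stack (assumed default services: Ollama + Open WebUI)
harbor up

# Start an additional service alongside the defaults,
# e.g. local web search for RAG (service handle is illustrative)
harbor up searxng

# Show a QR code so a phone on the same LAN can reach the UI
harbor qr

# Export the current selection as a standalone Docker Compose config
harbor eject searxng > compose.exported.yml

# Stop all running Harbor services
harbor down
```

Because `harbor eject` produces plain Docker Compose output, the exported stack can then be run with `docker compose` alone, without Harbor installed.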
## Use Cases

- Local LLM development environment
- Quick experimentation with different model backends
- Testing LLM workflows and RAG pipelines
- Running self-hosted AI tools for privacy
- Prototyping before production deployment
## Tech Stack

- Python (CLI tool)
- Docker, Docker Compose
- Bash
## Project Status

- Active development (v0.x)
- Current version: 0.3.34 (January 2026)
- Open-source and self-hosted
- Discord community: https://discord.gg/8nDRphrhSF