This guide walks through a self-hosted installation of Local Deep Research.
Fetch the official Docker Compose file from the project repo:

```shell
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
```
Launch the full stack (Local Deep Research, Ollama, and SearXNG):

```shell
docker compose up -d
```
Open http://localhost:5000 in your browser after about 30 seconds.

Note: the first startup may take 5-10 minutes while the Ollama model downloads.
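If you would rather script the wait than refresh the browser, a small polling loop does the job. The sketch below assumes a POSIX shell with `curl` installed; `wait_for_url` is a hypothetical helper, not part of Local Deep Research:

```shell
#!/bin/sh
# Poll a URL until it answers with a successful HTTP status, or give up
# after a number of one-second attempts (default 30).
wait_for_url() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

For example, `wait_for_url http://localhost:5000 600` waits up to ten minutes, which covers the initial model download.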
For hardware-accelerated inference on Linux:

```shell
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.gpu.override.yml
docker compose -f docker-compose.yml -f docker-compose.gpu.override.yml up -d
```
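For reference, Compose GPU overrides of this kind typically use Compose's device-reservation syntax on the inference service. The fragment below is illustrative only and may differ from the project's actual `docker-compose.gpu.override.yml`:

```yaml
# Illustrative sketch — check the downloaded override file for the real contents.
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # assumes the NVIDIA Container Toolkit is installed
              count: all
              capabilities: [gpu]
```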
The Docker Compose stack includes three services:
| Service | Description | Port |
|---|---|---|
| local-deep-research | Main web application | 5000 |
| ollama | Local LLM inference engine | 11434 (internal) |
| searxng | Privacy-focused metasearch engine | 8080 (internal) |
Alternatively, the package can be installed from PyPI:

```shell
pip install local-deep-research
```