This guide uses Docker to run Local Deep Research. If you do not have Docker installed yet, see the official Docker installation documentation.
Works on macOS (M1/M2/M3/M4 and Intel), Windows, and Linux:

```bash
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
docker compose up -d
```
For hardware-accelerated inference (prerequisite: install the NVIDIA Container Toolkit on the host first):

```bash
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.gpu.override.yml
docker compose -f docker-compose.yml -f docker-compose.gpu.override.yml up -d
```
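To sanity-check that the GPU is actually visible inside the container, you can run `nvidia-smi` through the compose service. This is only a sketch: the service name `ollama` is an assumption, so adjust it to match the service defined in your compose files.

```bash
# Show GPUs visible inside the Ollama container.
# NOTE: the service name "ollama" is assumed; check your compose file.
docker compose -f docker-compose.yml -f docker-compose.gpu.override.yml \
  exec ollama nvidia-smi
```

If no GPU is listed, re-check the NVIDIA Container Toolkit installation on the host before debugging the containers themselves.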
```bash
# Step 1: Pull and run Ollama
docker run -d -p 11434:11434 --name ollama ollama/ollama
docker exec ollama ollama pull gemma3:12b

# Step 2: Pull and run SearXNG
docker run -d -p 8080:8080 --name searxng searxng/searxng

# Step 3: Pull and run Local Deep Research
# --network host lets the container reach Ollama and SearXNG on localhost.
# Port mappings (-p) are ignored in host mode, so the app is served
# directly on port 5000. (Host networking requires Linux, or Docker
# Desktop with host networking enabled.)
docker run -d --network host \
  --name local-deep-research \
  --volume 'deep-research:/data' \
  -e LDR_DATA_DIR=/data \
  localdeepresearch/local-deep-research
```
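Once the three containers are up, a quick way to confirm each piece is working is to probe them directly. This is only a sketch: the `/` paths are plain reachability checks, not documented health endpoints, and the ports come from the commands above.

```bash
# Confirm the model was pulled into the Ollama container.
docker exec ollama ollama list

# Reachability checks for each service.
curl -sf http://localhost:11434/ && echo "Ollama OK"   # Ollama API
curl -sf http://localhost:8080/  && echo "SearXNG OK"  # SearXNG UI
curl -sf http://localhost:5000/  && echo "LDR OK"      # Web app
```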
Open http://localhost:5000 in your browser after ~30 seconds.
Note: The first startup may take 5-10 minutes while the Ollama model downloads.
The official docker-compose.yml includes three services:
| Service | Image | Port | Description |
|---|---|---|---|
| local-deep-research | localdeepresearch/local-deep-research:latest | 5000 | Main web application |
| ollama | ollama/ollama:latest | 11434 (internal) | Local LLM inference |
| searxng | searxng/searxng:latest | 8080 (internal) | Meta search engine |
It also defines three named volumes:

| Volume | Purpose |
|---|---|
| ldr_data | Application data (user databases, API keys, research history) |
| ollama_data | Downloaded Ollama models |
| searxng_data | SearXNG configuration |
```bash
# Start services
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down

# Update to the latest version
docker compose pull && docker compose up -d

# Remove all data (fresh start)
docker compose down -v
```
Specify a model with the `LDR_LLM_MODEL` environment variable:

```bash
LDR_LLM_MODEL=gemma3:4b docker compose up -d
```

The model is pulled automatically if it is not already available.
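To avoid retyping the variable on every start, you can persist it in a `.env` file next to docker-compose.yml, which Docker Compose reads automatically for variable interpolation. This assumes the compose file passes `LDR_LLM_MODEL` through to the container via interpolation, which the inline form above suggests; if it does not, the `.env` file will have no effect.

```bash
# Persist the model choice; Compose picks up .env from the working directory.
echo 'LDR_LLM_MODEL=gemma3:4b' >> .env
docker compose up -d
```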
Modify the port mapping in docker-compose.yml:

```yaml
ports:
  - "8080:5000"  # Expose on port 8080 instead of 5000
```
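After restarting with `docker compose up -d`, the app should answer on the new host port. A quick reachability check (a plain HTTP probe, not a documented health endpoint):

```bash
# The web app should now respond on host port 8080 instead of 5000.
curl -sf http://localhost:8080/ >/dev/null && echo "reachable on 8080"
```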
Any questions?
Feel free to contact us. Find all contact information on our contact page.