This guide covers running Open WebUI with Docker across several deployment options.

Current stable version: v0.8.9 (March 2026)

If you don't have Docker installed yet, see the official Docker documentation.
```bash
# Development/testing
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

# Production (recommended: use a versioned tag)
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:v0.8.9

# CUDA image (NVIDIA GPU acceleration)
docker run -d -p 3000:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:cuda

# Production with versioned CUDA tag
docker run -d -p 3000:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:v0.8.9-cuda
```
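After starting a container, it can help to poll Open WebUI's health endpoint before opening the UI. A minimal sketch, assuming the port mapping above and that `curl` is installed; the `/health` path is the endpoint the server exposes, but adjust the URL if your deployment differs:

```shell
# Poll the Open WebUI health endpoint until it answers or retries run out.
# Usage: wait_for_webui [url] [tries] [delay-seconds]
wait_for_webui() {
  url="${1:-http://localhost:3000/health}"
  tries="${2:-30}"
  delay="${3:-2}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f makes curl fail on HTTP errors, -sS keeps output quiet but shows errors
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo up
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo timeout
  return 1
}
```

`wait_for_webui http://localhost:3000/health` blocks until the container responds or the retries are exhausted, which is handy in provisioning scripts.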
Create a `docker-compose.yml`:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.9
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434

volumes:
  open-webui:
```
Start the service:

```bash
docker compose up -d
```
For GPU acceleration, use the CUDA image with an NVIDIA device reservation:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.9-cuda
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434

volumes:
  open-webui:
```
To run Ollama alongside Open WebUI in the same compose stack:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.9
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped
    # Uncomment for GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

volumes:
  open-webui:
  ollama:
```
For multi-worker deployments, use an external ChromaDB HTTP server:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.9
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - VECTOR_DB=chroma
      - CHROMA_HTTP_HOST=chromadb
      - CHROMA_HTTP_PORT=8000
      - UVICORN_WORKERS=4
    depends_on:
      - ollama
      - chromadb

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped

  chromadb:
    image: chromadb/chroma:latest
    container_name: chromadb
    ports:
      - "8000:8000"
    volumes:
      - chromadb:/chroma/chroma
    restart: unless-stopped

volumes:
  open-webui:
  ollama:
  chromadb:
```
For daemonless environments (such as RHEL/Fedora), use Podman. The `:Z` volume suffix relabels the data volume for SELinux:

```bash
podman run -d -p 3000:8080 -v open-webui:/app/backend/data:Z --name open-webui ghcr.io/open-webui/open-webui:v0.8.9
```
```yaml
# podman-kube.yml
apiVersion: v1
kind: Pod
metadata:
  name: open-webui
spec:
  containers:
    - name: open-webui
      image: ghcr.io/open-webui/open-webui:v0.8.9-cuda
      ports:
        - containerPort: 8080
          hostPort: 3000
      volumeMounts:
        - name: data
          mountPath: /app/backend/data
      resources:
        limits:
          nvidia.com/gpu: 1
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: open-webui-data
```
Deploy with:

```bash
podman kube play podman-kube.yml
```
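On systemd-based hosts, Podman's Quadlet units (Podman 4.4+) can manage the container as a service instead of `kube play`. A sketch of a unit file; the file location and unit name are examples, not part of Open WebUI:

```ini
# ~/.config/containers/systemd/open-webui.container
[Container]
Image=ghcr.io/open-webui/open-webui:v0.8.9
PublishPort=3000:8080
Volume=open-webui:/app/backend/data:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, start it with `systemctl --user start open-webui`.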
| Mount Point | Purpose |
|---|---|
| `/app/backend/data` | All user data (chats, settings, models, database) |

- Named volume (recommended): `open-webui`
- Bind mount alternative: `-v /opt/open-webui/data:/app/backend/data`
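Whichever mount style you use, the data directory can be archived for backup while the container is stopped. A minimal sketch for the bind-mount layout; the paths and function name are illustrative, not part of Open WebUI:

```shell
# Back up an Open WebUI data directory to a timestamped tarball.
# Stop the container first so the SQLite database is not mid-write.
backup_webui_data() {
  src="$1"    # e.g. /opt/open-webui/data
  dest="$2"   # directory that will hold the backups
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p "$dest"
  # -C changes into the source dir so the archive holds relative paths
  tar -czf "$dest/open-webui-data-$stamp.tar.gz" -C "$src" .
  echo "$dest/open-webui-data-$stamp.tar.gz"
}
```

For a named volume, the same tar step can run inside a throwaway helper container that mounts the volume.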
| Port | Purpose |
|---|---|
| 8080 | Internal container port (Open WebUI) |
| 3000 | Default external port (configurable) |
| 11434 | Ollama (if running separately) |
| 8000 | ChromaDB (if running separately) |
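When exposing Open WebUI on a public domain, a reverse proxy typically sits in front of the published port. A sketch for nginx, assuming the `3000:8080` mapping above; `webui.example.com` is a placeholder, and the WebSocket upgrade headers are needed for the chat UI:

```nginx
server {
    listen 80;
    server_name webui.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # WebSocket support for live chat streaming
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```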
⚠️ Critical: Versions 0.8.0 and later include long-running database migrations, so the first startup after an upgrade may take a while.
```bash
# Stop and remove the current container
docker stop open-webui && docker rm open-webui

# Pull the new image
docker pull ghcr.io/open-webui/open-webui:v0.8.9

# Start the new container (data persists in the volume)
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:v0.8.9
```
⚠️ You must follow this procedure to prevent database corruption: run with `UVICORN_WORKERS=1` (or a single replica) while migrations complete.

For automatic updates, Watchtower can monitor and replace the container:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.9
    # ... other config ...
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

  watchtower:
    image: nicholas-fedor/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300
    restart: unless-stopped
```
Before going to production:

- Pin the image to a versioned tag (not `:main`)
- Set an appropriate `UVICORN_WORKERS` count
- Set `CORS_ALLOW_ORIGIN` to your domain
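`UVICORN_WORKERS` and `CORS_ALLOW_ORIGIN` can be set through the compose `environment` block; the domain below is an example:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.8.9
    environment:
      - UVICORN_WORKERS=4  # >1 requires an external vector DB (see the ChromaDB example)
      - CORS_ALLOW_ORIGIN=https://webui.example.com
```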