This guide uses Docker to run LocalAI.
For Docker installation instructions, see the official Docker documentation.
Basic Docker run command:

```bash
docker run -p 8080:8080 -v /path/to/models:/models --name local-ai -ti localai/localai:latest
```
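Once the container is up, you can verify the API is reachable through LocalAI's OpenAI-compatible endpoints. A minimal Python sketch, assuming the default port mapping from the command above:

```python
import json
import urllib.request

def models_url(base_url):
    """Build the URL of the OpenAI-compatible model-listing endpoint."""
    return f"{base_url.rstrip('/')}/v1/models"

def list_models(base_url="http://localhost:8080", timeout=5):
    """Return the ids of all models the running LocalAI instance exposes."""
    with urllib.request.urlopen(models_url(base_url), timeout=timeout) as resp:
        return [m["id"] for m in json.load(resp).get("data", [])]

# With a running container:
# print(list_models())
```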
With Docker Compose:

```yaml
services:
  localai:
    image: localai/localai:latest
    container_name: local-ai
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
```

Start the stack with `docker compose up -d`.
GPU acceleration (NVIDIA, requires the NVIDIA Container Toolkit on the host):

```yaml
services:
  localai:
    image: localai/localai:latest-gpu-nvidia-cuda-12
    container_name: local-ai
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
Mount a host directory to `/models` to persist model files across container restarts.

Any questions?
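With a model installed under `/models`, requests go through LocalAI's OpenAI-compatible chat-completion endpoint. A short sketch of building such a request; the model name `my-model` is a placeholder for whichever model you actually have installed:

```python
import json
import urllib.request

def chat_request(base_url, model, prompt):
    """Build an OpenAI-compatible chat-completion request (URL + JSON body)."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,  # placeholder: use the name of an installed model
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = chat_request("http://localhost:8080", "my-model", "Hello!")

# With a running container:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```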
Feel free to contact us. Find all contact information on our contact page.