This guide covers hardening an AnythingLLM Docker deployment: securing RAG connectors, isolating workspaces, and controlling model endpoints, along with the relevant configuration options.
Version: v1.11.1 (Mar 2026)
## Environment Configuration

The main configuration lives in `storage/.env`:

```env
# Application Settings
STORAGE_DIR=/app/server/storage
WORKSPACES_DIR=/app/server/storage/workspaces
NODE_ENV=production
DISABLE_TELEMETRY=false

# LLM Provider Configuration
LLM_PROVIDER=ollama
OLLAMA_BASE_PATH=http://ollama:11434

# Alternative: OpenAI
# LLM_PROVIDER=openai
# OPENAI_API_KEY=sk-your-api-key-here

# Alternative: Anthropic
# LLM_PROVIDER=anthropic
# ANTHROPIC_API_KEY=sk-ant-your-api-key-here

# Alternative: Azure OpenAI
# LLM_PROVIDER=azureOpenai
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
# AZURE_OPENAI_API_KEY=your-azure-api-key
# AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment-name

# Vector Database Configuration
VECTOR_DB=lancedb

# Alternative: Chroma
# VECTOR_DB=chroma
# CHROMA_ENDPOINT=http://chroma:8000

# Alternative: Pinecone
# VECTOR_DB=pinecone
# PINECONE_API_KEY=your-pinecone-api-key
# PINECONE_INDEX_NAME=your-index-name

# Alternative: Qdrant
# VECTOR_DB=qdrant
# QDRANT_ENDPOINT=http://qdrant:6333
# QDRANT_API_KEY=your-qdrant-api-key

# Security Settings
JWT_SECRET=replace-with-long-random-secret-min-32-chars
MULTI_USER_MODE=false

# Network Settings
ANYTHINGLLM_URL=https://anythingllm.example.com
PORT=3001
```
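`JWT_SECRET` should be a long random string, never a guessable value. One way to generate one (assuming `openssl` is installed):

```shell
# Generate a 48-byte, base64-encoded random secret (well over the 32-char minimum)
openssl rand -base64 48
```

Paste the output into `storage/.env` as the `JWT_SECRET` value and restart the container.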
## LLM Provider Details

### OpenAI

```env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_BASE_PATH=https://api.openai.com/v1
# Optional: Custom base URL for proxies
# OPENAI_BASE_PATH=https://your-proxy.com/v1
```

Supported Models:

- gpt-4o (recommended)
- gpt-4-turbo
- gpt-4
- gpt-3.5-turbo

### Anthropic

```env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-api-key-here
# Optional: Custom base URL
# ANTHROPIC_BASE_PATH=https://api.anthropic.com
```

Supported Models:

- claude-3-5-sonnet-20241022 (recommended)
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-3-haiku-20240307

### Google Generative AI

```env
LLM_PROVIDER=googleGenerativeAI
GEMINI_API_KEY=your-gemini-api-key
```

Supported Models:

- gemini-pro
- gemini-1.5-pro
- gemini-1.5-flash

### Azure OpenAI

```env
LLM_PROVIDER=azureOpenai
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=your-azure-api-key
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment-name
AZURE_OPENAI_API_VERSION=2024-02-15-preview
```
### Ollama

```env
LLM_PROVIDER=ollama
OLLAMA_BASE_PATH=http://ollama:11434
# For Ollama on the Docker host
# OLLAMA_BASE_PATH=http://host.docker.internal:11434
# For Ollama on localhost
# OLLAMA_BASE_PATH=http://localhost:11434
```

Supported Models:

- llama3.1
- mistral
- phi3
- codellama

### LM Studio

```env
LLM_PROVIDER=lmstudio
LMSTUDIO_BASE_PATH=http://localhost:1234/v1
```
### Groq

```env
LLM_PROVIDER=groq
GROQ_API_KEY=your-groq-api-key
```

Supported Models:

- llama3-70b-8192
- llama3-8b-8192
- mixtral-8x7b-32768

### Mistral

```env
LLM_PROVIDER=mistral
MISTRAL_API_KEY=your-mistral-api-key
```

### Together AI

```env
LLM_PROVIDER=togetherai
TOGETHERAI_API_KEY=your-together-api-key
```

### OpenRouter

```env
LLM_PROVIDER=openrouter
OPENROUTER_API_KEY=your-openrouter-api-key
```
## Vector Database Details

### LanceDB (default)

```env
VECTOR_DB=lancedb
```

No additional configuration is needed; data is stored in `./storage/vector-cache/lancedb`.
### Chroma

```env
VECTOR_DB=chroma
CHROMA_ENDPOINT=http://chroma:8000
# For local Chroma
# CHROMA_ENDPOINT=http://localhost:8000
```
### Pinecone

```env
VECTOR_DB=pinecone
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_INDEX_NAME=anythingllm-index
PINECONE_ENVIRONMENT=us-east-1
```
### Qdrant

```env
VECTOR_DB=qdrant
QDRANT_ENDPOINT=http://qdrant:6333
QDRANT_API_KEY=your-qdrant-api-key
# For Qdrant Cloud
# QDRANT_ENDPOINT=https://your-cluster.qdrant.tech
```
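Before pointing AnythingLLM at a remote Qdrant instance, it can help to confirm the endpoint and key actually work. Qdrant's REST API exposes a `/collections` listing and authenticates via the `api-key` header; the host and key below are placeholders matching the config above:

```shell
# A valid endpoint and key return an HTTP 200 JSON response listing collections
curl -s -H "api-key: your-qdrant-api-key" http://qdrant:6333/collections
```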
### Weaviate

```env
VECTOR_DB=weaviate
WEAVIATE_ENDPOINT=http://weaviate:8080
WEAVIATE_API_KEY=your-weaviate-api-key
# For Weaviate Cloud
# WEAVIATE_ENDPOINT=https://your-cluster.weaviate.network
```
### Milvus

```env
VECTOR_DB=milvus
MILVUS_ENDPOINT=http://milvus:19530
MILVUS_USERNAME=root
MILVUS_PASSWORD=Milvus
```

Note: `root`/`Milvus` are Milvus's default credentials; change them before exposing the instance.
## Embedding Engine Configuration

Ollama:

```env
EMBEDDING_ENGINE=ollama
OLLAMA_EMBEDDING_MODEL=nomic-embed-text
```

OpenAI:

```env
EMBEDDING_ENGINE=openai
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
```

Azure OpenAI:

```env
EMBEDDING_ENGINE=azureOpenai
AZURE_EMBEDDING_DEPLOYMENT=text-embedding-ada-002
```

LocalAI:

```env
EMBEDDING_ENGINE=localai
LOCALAI_BASE_PATH=http://localhost:8080
LOCALAI_EMBEDDING_MODEL=text-embedding-ada-002
```
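When using Ollama for embeddings, the model must be pulled before AnythingLLM can call it. A quick sanity check using the standard Ollama CLI and its `/api/embeddings` endpoint (the localhost host is an assumption; adjust to your `OLLAMA_BASE_PATH`):

```shell
# Pull the embedding model, then request a test embedding via Ollama's API
ollama pull nomic-embed-text
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "test"}'
```

A JSON response containing an `embedding` array confirms the model is ready.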
## Reverse Proxy

### Nginx

Modern configuration with TLS 1.2/1.3 support:

```nginx
# HTTP to HTTPS redirect
server {
    listen 80;
    server_name anythingllm.example.com;
    return 301 https://$server_name$request_uri;
}

# HTTPS server
server {
    listen 443 ssl http2;
    server_name anythingllm.example.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/anythingllm.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/anythingllm.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin";

    # Timeouts (generous, for long-running LLM responses)
    proxy_connect_timeout 605;
    proxy_send_timeout 605;
    proxy_read_timeout 605;
    send_timeout 605;
    keepalive_timeout 605;

    # Disable buffering for streaming responses
    proxy_buffering off;
    proxy_cache off;

    # Main location
    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    # WebSocket support for agent protocol
    location ~* ^/api/agent-invocation/(.*) {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Health check endpoint (no auth required)
    location /api/health {
        proxy_pass http://localhost:3001;
        access_log off;
    }
}
```
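After editing the server block, validate the syntax and reload without dropping connections (standard nginx CLI; assumes a systemd host):

```shell
# nginx -t checks configuration syntax; reload applies it gracefully
sudo nginx -t && sudo systemctl reload nginx
```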
### Caddy

Caddy handles HTTPS and TLS certificate management automatically:

```
anythingllm.example.com {
    reverse_proxy localhost:3001
}
```
### Traefik

Attach these labels to the AnythingLLM service in your compose file:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.anythingllm.rule=Host(`anythingllm.example.com`)"
  - "traefik.http.routers.anythingllm.entrypoints=websecure"
  - "traefik.http.routers.anythingllm.tls.certresolver=letsencrypt"
  - "traefik.http.services.anythingllm.loadbalancer.server.port=3001"
```
## Multi-User Mode

Multi-user mode is only available in the Docker version. Enable it in the web UI at http://localhost:3001, or via the environment:

```env
MULTI_USER_MODE=true
JWT_SECRET=your-secure-random-secret-min-32-chars
```
## Security Advisories

Important: AnythingLLM has disclosed 13 security advisories (Jan 2024 - Mar 2026). Always run v1.10.0 or later.
| GHSA ID | CVE | Vulnerability | Severity | Fixed |
|---|---|---|---|---|
| GHSA-rrmw-2j6x-4mf2 | - | XSS to RCE via LLM Response Injection | Critical | - |
| GHSA-gm94-qc2p-xcwf | CVE-2026-24477 | API key leak in systemSettings.js | High (8.7) | v1.10.0 |
| GHSA-jp2f-99h9-7vjv | CVE-2026-24478 | Path traversal in DrupalWiki | High (7.2) | v1.10.0 |
| GHSA-jwjx-mw2p-5wc7 | - | SQL Injection in SQL Agent Plugin | High | - |
| GHSA-24qj-pw4h-3jmm | - | Permissive CORS policy | High | - |
| GHSA-7hpg-6pc7-cx86 | - | Ollama token leak | High | - |
| GHSA-2qmm-82f7-8qj5 | - | IDOR Cross-User Chat Feedback | Moderate | - |
| GHSA-rh66-4w74-cf4m | - | Zip Slip Path Traversal | Moderate | - |
| GHSA-p5rf-8p88-979c | - | Cross-Workspace IDOR | Moderate | - |
| GHSA-47vr-w3vm-69ch | - | Username Enumeration | Moderate | - |
| GHSA-xmj6-g32r-fc5q | - | Unauthenticated DOS | High | - |
| GHSA-wfq3-65gm-3g2p | - | Manager Privilege Bypass | Low | - |
| GHSA-7754-8jcc-2rg3 | - | Suspended Users API Key Access | Low | - |
Source: GitHub Security Advisories
## Rate Limiting

Configure in Nginx. Note that `limit_req_zone` must be declared in the `http` context, while `limit_req` is applied per `location`:

```nginx
# Rate limiting zone: 10 requests/second per client IP, 10 MB of shared state
limit_req_zone $binary_remote_addr zone=anythingllm_limit:10m rate=10r/s;

# Apply in location block
location / {
    limit_req zone=anythingllm_limit burst=20 nodelay;
    # ... rest of config ...
}
```
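A quick way to see the limiter in action (assumes the proxy is reachable at https://anythingllm.example.com; nginx rejects excess requests with HTTP 503 by default):

```shell
# Fire 30 rapid requests and tally the status codes; with the limiter
# active, some should come back as 503 once the burst allowance is spent
for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code}\n" https://anythingllm.example.com/api/health
done | sort | uniq -c
```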
## API Access

Generate API keys in the web UI; each key is sent as a bearer token on developer API requests.

For API access from other domains:

```env
# In storage/.env
CORS_ALLOWED_ORIGINS=https://app1.example.com,https://app2.example.com
```
Create a workspace via the API:

```bash
curl -X POST http://localhost:3001/api/v1/workspace \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Workspace",
    "slug": "my-workspace",
    "description": "Workspace description"
  }'
```
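To confirm the call worked, the API can be queried back. The list-endpoint path below is an assumption based on the workspace endpoint above; check your instance's API documentation if it differs:

```shell
# List workspaces and pretty-print the JSON (python3 avoids a jq dependency)
curl -s http://localhost:3001/api/v1/workspaces \
  -H "Authorization: Bearer YOUR_API_KEY" | python3 -m json.tool
```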
```env
# Default workspace settings
DEFAULT_WORKSPACE_VISIBILITY=private
DEFAULT_WORKSPACE_ACCESS=restricted
```
## Troubleshooting

Configuration changes not applied:

```bash
# Check .env file
cat storage/.env

# Restart container
docker compose restart anythingllm

# Check logs
docker compose logs anythingllm | grep -i config
```

Ollama not reachable:

```bash
# Test API endpoint
curl -X POST http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "test", "stream": false}'

# Check network connectivity (ping may be absent in slim images; curl works too)
docker compose exec anythingllm ping ollama
```

Vector search problems:

```bash
# Check vector cache directory
ls -la storage/vector-cache/

# Clear vector cache (documents must be re-indexed afterwards)
rm -rf storage/vector-cache/*

# Restart and re-index
docker compose restart anythingllm
```
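For ongoing monitoring, a simple liveness check against the unauthenticated health endpoint from the Nginx config above:

```shell
# Prints "healthy" if AnythingLLM answers on the health endpoint, else "unreachable"
curl -sf http://localhost:3001/api/health > /dev/null && echo "healthy" || echo "unreachable"
```

This is suitable as a cron job or a Docker `healthcheck` command.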