A quick start guide to get the LiteLLM proxy server running in under 5 minutes.
```bash
pip install litellm
```
```python
from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "sk-your-key"

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
```bash
pip install 'litellm[proxy]'
```
Create `litellm_config.yaml`:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```
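Values written as `os.environ/VAR` are not passed to the provider literally; the proxy resolves them from environment variables when it loads the config. A minimal sketch of that convention (illustrative only, not LiteLLM's actual resolver):

```python
import os

def resolve_secret(value: str) -> str:
    # Illustrative sketch: config values of the form "os.environ/VAR"
    # are looked up in the environment; anything else passes through.
    prefix = "os.environ/"
    if value.startswith(prefix):
        return os.environ[value[len(prefix):]]
    return value

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(resolve_secret("os.environ/OPENAI_API_KEY"))  # prints sk-demo
print(resolve_secret("sk-literal"))                 # plain strings pass through
```

This is why the YAML never needs to contain a real key: only the variable name is committed, and the secret stays in the shell environment.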
```bash
export OPENAI_API_KEY="sk-your-openai-api-key"
litellm --config litellm_config.yaml
```
The proxy runs at `http://0.0.0.0:4000`.
```bash
curl --location 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
```python
import openai

client = openai.OpenAI(
    api_key="sk-1234",  # any value unless the proxy enforces a master_key
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
```bash
# Create config file first (see above)
docker run -d \
  -v $(pwd)/litellm_config.yaml:/app/config.yaml \
  -e OPENAI_API_KEY="sk-your-key" \
  -p 4000:4000 \
  docker.litellm.ai/berriai/litellm:main-stable \
  --config /app/config.yaml
```
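If you prefer Docker Compose, the same run can be expressed as a service definition (a sketch under the same assumptions; adjust the image tag, port, and config path to your setup):

```yaml
# Sketch of an equivalent Docker Compose service (not an official file)
services:
  litellm:
    image: docker.litellm.ai/berriai/litellm:main-stable
    command: ["--config", "/app/config.yaml"]
    ports:
      - "4000:4000"
    environment:
      OPENAI_API_KEY: "sk-your-key"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
```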
```bash
# Basic installation
pip install litellm

# With proxy server
pip install 'litellm[proxy]'

# All extras (development)
pip install -e ".[all]"

# Using Poetry
poetry add litellm
```
| Image | Use Case |
|---|---|
| `docker.litellm.ai/berriai/litellm:main-stable` | Standard proxy |
| `docker.litellm.ai/berriai/litellm:main-latest` | Latest build |
| `docker.litellm.ai/berriai/litellm-database:main-stable` | With database support |
| `docker.litellm.ai/berriai/litellm-non_root:main-stable` | Non-root user |
Create `litellm_config.yaml`:

```yaml
model_list:
  # OpenAI
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  # Anthropic
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY

  # Azure OpenAI
  - model_name: azure-gpt-4o
    litellm_params:
      model: azure/gpt-4o
      api_base: https://your-endpoint.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY
      api_version: "2025-01-01-preview"

# Verbose logging
litellm_settings:
  set_verbose: true

general_settings:
  master_key: sk-1234  # Optional: master API key clients must present
```
```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export AZURE_API_KEY="..."

litellm --config litellm_config.yaml --detailed_debug
```
```bash
# Test OpenAI
curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello from OpenAI!"}]}'

# Test Anthropic
curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{"model": "claude-sonnet", "messages": [{"role": "user", "content": "Hello from Claude!"}]}'

# Test Azure
curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{"model": "azure-gpt-4o", "messages": [{"role": "user", "content": "Hello from Azure!"}]}'
```