AnythingLLM (v1.11.2) is a self-hosted, full-stack AI application developed by Mintplex Labs that combines document chat, AI agents, and configurable model backends in a single interface. With over 56.6k GitHub stars, it’s one of the most popular open-source RAG platforms. It supports multiple LLMs and vector stores, making it ideal for internal knowledge bases and private RAG deployments. The Docker version supports multi-user access with role-based permissions, while the desktop app provides single-user functionality.
License: MIT (open-source)
GitHub: Mintplex-Labs/anything-llm
- Built-in RAG - Turn documents into context for any LLM
- AI Agents - Autonomous AI capabilities with no-code agent builder
- MCP Compatible - Full Model Context Protocol support
- Multi-user Mode - Role-based access control (Docker version)
- Multi-modal Support - Images, PDFs, and documents
- Embeddable Chat Widgets - Deploy chat on your website
- Document Ingestion - PDF, TXT, DOCX, MD, and more
- Agent Tools - Web browsing, code execution, API calls
- Chat Modes - Conversation mode (retains chat history) and query mode (single-shot retrieval)
- API Access - Full developer API for integrations
- Desktop App - Native applications for macOS, Windows, and Linux
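The developer API above can be called from any HTTP client. Below is a minimal Node.js sketch, assuming the v1 workspace-chat endpoint (`/api/v1/workspace/{slug}/chat`), a Bearer API key, and a `{message, mode}` JSON payload; verify the exact paths and fields against your instance's built-in API documentation. The base URL and workspace slug are placeholders.

```javascript
// Sketch of calling the AnythingLLM developer API from Node.js.
// Endpoint path, payload shape, and header names are assumptions based on
// the v1 API conventions; adjust to your deployment.
const BASE_URL = "http://localhost:3001"; // placeholder instance URL

// Build the fetch() arguments for a workspace chat request.
function buildChatRequest(apiKey, workspaceSlug, message) {
  return {
    url: `${BASE_URL}/api/v1/workspace/${workspaceSlug}/chat`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // mode "chat" keeps conversation history; "query" is single-shot RAG
      body: JSON.stringify({ message, mode: "chat" }),
    },
  };
}

// Usage (requires a running instance and a valid API key):
// const { url, options } = buildChatRequest(
//   process.env.ANYTHINGLLM_API_KEY, "my-docs", "Summarize the onboarding guide.");
// const res = await fetch(url, options);
// console.log((await res.json()).textResponse);
```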
Note: AnythingLLM supports 33+ LLM providers. Below are the most popular options.
- OpenAI - GPT-4, GPT-4o, GPT-3.5-turbo
- Anthropic - Claude 3.5, Claude 3
- Google Gemini - Gemini Pro, Gemini Ultra
- Azure OpenAI - Enterprise Azure deployment
- AWS Bedrock - Amazon’s managed LLM service
- Groq - Fast inference with LPU
- Mistral AI - Mistral, Mixtral models
- Cohere - Command, Command-R models
- Perplexity AI - Search-enhanced models
- Together AI - Open-source model hosting
- OpenRouter - Unified API for multiple providers
- DeepSeek - DeepSeek-V2, DeepSeek-Coder
- xAI - Grok models
- NVIDIA NIM - NVIDIA inference platform
- Fireworks AI - Fast inference service
- Z.AI - Zhipu AI models
- Ollama - Local model serving
- LM Studio - Desktop model server
- LocalAI - Self-hosted OpenAI-compatible API
- KoboldCPP - Local inference engine
- Text Generation Web UI - Oobabooga web interface
- LiteLLM - Unified LLM API proxy
- LanceDB - Default, serverless vector DB
- Chroma - Popular open-source vector DB
- Milvus - Scalable vector database
- PGVector - PostgreSQL vector extension
- Astra DB - DataStax vector database
- Pinecone - Managed vector database
- Qdrant - Vector similarity search engine
- Weaviate - GraphQL-enabled vector DB
- Zilliz - Managed Milvus service
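The vector store is selected at deploy time through environment variables. A minimal sketch of the relevant `.env` entries, assuming the variable names follow the server's `.env.example` conventions (check your version before relying on them):

```shell
# .env — vector DB selection (falls back to the bundled LanceDB if unset)
VECTOR_DB="lancedb"

# Example: point at an external Qdrant instead (names assumed; verify
# against .env.example for your release)
# VECTOR_DB="qdrant"
# QDRANT_ENDPOINT="http://localhost:6333"
# QDRANT_API_KEY=""
```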
- Internal Knowledge Base - Chat with company documents
- Private RAG Applications - Keep data within your security boundary
- Team AI Assistant - Multi-user access with permissions
- Customer Support Chatbots - Embeddable widgets
- Research Assistant - Document analysis and summarization
- Code Documentation - Chat with codebases
- Backend: Node.js
- Frontend: React
- Language: JavaScript (98.3%), CSS (1.4%), Dockerfile (0.2%), HTML (0.1%)
- Database: SQLite (default), PostgreSQL (optional)
- Vector DB: LanceDB (default), Chroma, Milvus, Pinecone, Qdrant, Weaviate
- Deployment: Docker, Desktop App (macOS/Windows/Linux)
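For the Docker deployment path, a minimal `docker-compose.yml` sketch is shown below. The image name, port, and storage path follow the commonly documented defaults (`mintplexlabs/anythingllm`, port 3001, `/app/server/storage`), but treat them as assumptions and confirm against the official Docker instructions for your version:

```yaml
services:
  anythingllm:
    image: mintplexlabs/anythingllm:latest
    cap_add:
      - SYS_ADMIN            # required for the webpage-scraping agent tool
    ports:
      - "3001:3001"          # web UI and API
    environment:
      - STORAGE_DIR=/app/server/storage
    volumes:
      - ./anythingllm-storage:/app/server/storage   # persist documents, vectors, SQLite DB
```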
| Component | Minimum | Recommended |
|-----------|---------|-------------|
| CPU | 2 cores | 4+ cores |
| RAM | 4GB | 8GB+ |
| Disk | 10GB | 50GB+ (for local models) |
| Network | Required for cloud LLMs | Optional for local LLMs |
- ✅ Open-source and self-hosted
- ✅ MIT License
- ✅ Active development (v1.11.2 - March 18, 2026)
- ✅ 56.6k+ GitHub stars, 6.1k+ forks
- ✅ Multi-user mode available (Docker version)
- ⚠️ Requires the SYS_ADMIN Docker capability for webpage scraping
- ⚠️ 13 security advisories disclosed (Jan 2024 - Mar 2026) - keep updated to v1.11.2+
- Critical: CVE-2026-32626 / GHSA-rrmw-2j6x-4mf2 (XSS to RCE via LLM injection, Mar 2026)
- High: CVE-2026-24477, CVE-2026-24478, GHSA-jwjx-mw2p-5wc7, GHSA-24qj-pw4h-3jmm, GHSA-7hpg-6pc7-cx86
- Moderate: GHSA-2qmm-82f7-8qj5, GHSA-rh66-4w74-cf4m, GHSA-p5rf-8p88-979c, GHSA-47vr-w3vm-69ch, GHSA-xmj6-g32r-fc5q
- Low: GHSA-wfq3-65gm-3g2p, GHSA-7754-8jcc-2rg3