Agenta is an open-source LLMOps platform for building production-grade LLM applications. It helps engineering and product teams create reliable LLM apps faster through integrated prompt management, evaluation, and observability. Agenta is designed to run on your own infrastructure so you can keep test data, prompts, and traces private.
Official Website: https://www.agenta.ai/
- Interactive LLM Playground - Compare prompts side-by-side against test cases
- Multi-Model Support - Experiment with 50+ LLMs or bring your own models
- Version Control - Version prompts and configurations with branching and environments
- Complex Configurations - Enable SMEs to collaborate on complex configuration schemas
- Flexible Testsets - Create testcases from production data, playground experiments, or CSV uploads
- Pre-built and Custom Evaluators - LLM-as-judge, 20+ pre-built evaluators, or custom evaluators
- UI and API Access - Run evaluations via UI (for SMEs) or programmatically (for engineers)
- Human Feedback Integration - Collect and incorporate expert annotations
- Cost & Performance Tracking - Monitor spending, latency, and usage patterns
- LLM Tracing - Debug complex workflows with detailed traces
- Open Standards - OpenTelemetry native tracing (compatible with OpenLLMetry and OpenInference)
- Integrations - Pre-built integrations for most models and frameworks
- Centralized Platform - Keep prompts, evaluations, and traces in one place
- UI for Domain Experts - Enable non-technical experts to edit and experiment with prompts
- Full API and UI Parity - Integrate programmatic and UI workflows into one central hub
- Prompt testing and iteration with version control
- LLM evaluation pipelines with automated and human evaluators
- RAG application monitoring and debugging
- Multi-model comparison and vendor selection
- Production LLM observability and cost tracking
- Team collaboration on prompt engineering
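The custom-evaluator idea above can be sketched as plain functions that score a model output against a reference answer. This is a minimal illustration only: the function names and the 0-to-1 score scale are assumptions for the example, not Agenta's actual evaluator API.

```python
# Hypothetical custom evaluators: each takes an expected (reference) answer
# and a model output, and returns a score between 0.0 and 1.0.

def exact_match_evaluator(expected: str, output: str) -> float:
    """Return 1.0 only if the output matches the reference exactly."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def contains_evaluator(expected: str, output: str) -> float:
    """Return 1.0 if the reference answer appears anywhere in the output."""
    return 1.0 if expected.lower() in output.lower() else 0.0

# A tiny illustrative testset (in practice this would come from production
# data, playground experiments, or a CSV upload).
testset = [
    {"expected": "Paris", "output": "The capital of France is Paris."},
    {"expected": "Berlin", "output": "Madrid"},
]

# Aggregate score across the testset: one hit, one miss.
scores = [contains_evaluator(r["expected"], r["output"]) for r in testset]
print(sum(scores) / len(scores))  # -> 0.5
```

LLM-as-judge evaluators follow the same shape, except the scoring function itself calls a model with a grading prompt instead of comparing strings.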
| Component | Technology |
| --- | --- |
| Primary Languages | TypeScript (54.9%), Python (44.3%) |
| Architecture | Multi-service Docker deployment |
| Integrations | LangChain, LlamaIndex, OpenAI, any model provider |
| Tracing | OpenTelemetry (OpenLLMetry, OpenInference compatible) |
- MIT (Expat) license for core software
- Enterprise Edition components (in `ee/` directories) are under a separate license
- Third-party components remain under their original licenses
- Active open-source project with 3.9k+ GitHub stars
- Latest release: v0.92.1 (March 2026)
- 83+ contributors, 16,840+ commits
- Both self-hosted and cloud (SaaS) deployment options available
- Free tier available on Agenta Cloud (no credit card required)
History and References
Any questions? Feel free to reach out; you can find all contact information on our contact page.