Agenta stores prompts, evaluation datasets, traces, and model/provider credentials. Harden workspace access, protect experiment data, and isolate model gateway secrets.
¶ 1) Protect workspace and API access
- Enable authentication - Configure SSO or strong local admin credentials for Agenta operators
- Disable public access - Disable public or shared demo access on production instances
- Role separation - Separate admin and evaluator roles to reduce accidental data exposure
- Rotate API tokens - Regularly rotate API tokens used by CI or evaluation pipelines
- Limit network exposure - Bind to localhost or private interface; use reverse proxy for external access
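The token-rotation point above can be sketched in a couple of lines. This is a minimal sketch: the file name is illustrative, and how you register the new token with Agenta depends on your deployment (UI or API).

```shell
# Generate a fresh 256-bit random token for CI or evaluation pipelines.
NEW_TOKEN="$(openssl rand -hex 32)"

# Store it with owner-only permissions instead of committing it to Git.
umask 077
printf '%s\n' "$NEW_TOKEN" > ci-agenta-token.secret

echo "token length: ${#NEW_TOKEN}"
```

Rotating means generating a token like this on a schedule, updating the CI secret store, and revoking the old token in Agenta.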
¶ 2) Secure model and provider credentials
- Use secret storage - Store OpenAI, Anthropic, and other provider keys in secret storage (Vault, AWS Secrets Manager), not in Git or notebooks
- Limit outbound access - Use firewall rules to limit outbound access to approved model endpoints only
- Environment isolation - Isolate staging and production credentials to prevent cross-environment leakage
- Redact traces - Redact prompt traces before exporting results to external systems
- API key rotation - Implement a regular API key rotation schedule
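A cheap guard for the "not in Git or notebooks" rule is a pre-commit scan for provider key shapes. The patterns below are heuristic assumptions (OpenAI keys start with "sk-", AWS access key IDs with "AKIA"); tune them for the providers you actually use.

```shell
# Rough pre-commit check for provider keys accidentally left in the repo.
# Returns 1 (and prints the offending lines) if anything looks like a key.
scan_for_keys() {
    grep -rEn 'sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}' "$1" && return 1
    return 0
}

# Demo: a notebook with a fake leaked key should be flagged.
mkdir -p demo_repo
echo 'openai_key = "sk-abcdefghijklmnopqrstuvwx"' > demo_repo/notebook.py
if scan_for_keys demo_repo; then
    echo "clean"
else
    echo "leaked credentials found"
fi
```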
¶ 3) Harden telemetry, traces, and datasets
- Treat evaluation data as sensitive - Apply a retention policy to evaluation datasets
- Restrict trace access - Limit access to trace payloads that may contain customer prompts or responses
- Encrypt backups - Encrypt backups for prompt repositories and evaluation history
- Enable audit logging - Log role changes, token creation, and dataset imports
- Disable telemetry if needed - Set AGENTA_TELEMETRY_ENABLED=false for air-gapped or sensitive environments
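The trace-redaction step can be sketched as a stream filter. This is a minimal sketch only: real trace payloads are structured JSON, so prefer a JSON-aware redactor; the two patterns here (emails and bearer tokens) are illustrative assumptions about what your traces contain.

```shell
# Mask email addresses and bearer tokens in a trace export before it
# leaves the environment.
redact_trace() {
    sed -E \
        -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/[REDACTED_EMAIL]/g' \
        -e 's/Bearer [A-Za-z0-9._-]+/Bearer [REDACTED]/g'
}

echo 'user alice@example.com sent Bearer abc123.def' | redact_trace
# → user [REDACTED_EMAIL] sent Bearer [REDACTED]
```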
¶ 4) Docker and container security
- Run as non-root - Ensure containers run as a non-root user where possible
- Read-only filesystem - Use read-only root filesystem for containers where applicable
- Drop capabilities - Drop unnecessary Linux capabilities from containers
- Resource limits - Set CPU and memory limits to prevent denial-of-service conditions
- Network isolation - Use Docker networks to isolate Agenta services from other containers
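The bullets above map onto Compose options roughly as follows. This is a sketch, not Agenta's shipped configuration: the service name, image, UID, and limits are illustrative assumptions to adapt to your deployment.

```yaml
# Hardening sketch as a Compose override (names are illustrative).
services:
  agenta-backend:
    image: agenta-backend:latest   # assumption: substitute your image
    user: "1000:1000"              # run as non-root
    read_only: true                # read-only root filesystem
    cap_drop: [ALL]                # drop unnecessary Linux capabilities
    mem_limit: 1g                  # resource limits against DoS
    cpus: "1.0"
    networks: [agenta-internal]    # isolate from unrelated containers
networks:
  agenta-internal:
    internal: true
```

Services that need to write (e.g. for temp files) can combine `read_only: true` with a `tmpfs` mount rather than reverting to a writable root filesystem.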
Agenta has disclosed and patched the following high-severity vulnerabilities:
| Advisory | CVE | Severity | Affected | Patched | Description |
|---|---|---|---|---|---|
| GHSA-cfr2-mp74-3763 | CVE-2026-27961 | High (8.8) | ≤ 0.86.7 | 0.86.8+ | Server-Side Template Injection (SSTI) via Jinja2 templates allowing RCE |
| GHSA-pmgp-2m3v-34mq | CVE-2026-27952 | High (8.8) | < 0.48.1 | 0.48.1+ | Python sandbox escape via numpy allowing RCE |
¶ CVE-2026-27961: Server-Side Template Injection (SSTI)
Impact: Remote Code Execution (RCE) on the API server by any authenticated user.
Attack Vector: Malicious Jinja2 templates in custom evaluator configurations can execute arbitrary Python code in the API server process.
Mitigation:
- Upgrade to version 0.86.8 or later immediately
- Avoid using template_format="jinja2" in evaluator configurations until patched
- Review container logs for suspicious activity if running affected versions
- Rotate any secrets that may have been exposed
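A quick way to confirm you are past the patched release is a version-sort comparison. How you read the running version is deployment-specific (image tag, package metadata, API); here it is a plain string for illustration.

```shell
# Is the installed version ($1) at or above the patched version ($2)?
is_patched() {
    # sort -V orders version strings; if the patched version sorts first
    # (or is equal), the installed version is >= patched.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if is_patched "0.86.7" "0.86.8"; then echo "patched"; else echo "upgrade now"; fi
# → upgrade now
```

The same check works for the 0.48.1 fix for CVE-2026-27952.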
¶ CVE-2026-27952: Python Sandbox Escape
Impact: Remote Code Execution (RCE) on the API server by any authenticated user.
Attack Vector: RestrictedPython sandbox incorrectly whitelisted numpy package, allowing access to sys.modules and arbitrary code execution via numpy.ma.core.inspect.
Mitigation:
- Upgrade to version 0.48.1 or later (sandbox removed entirely in v0.60+)
- Rotate environment variables, API keys, and secrets if running affected versions
- Audit deployment for signs of exploitation
To report security issues:
- Contact: security@agenta.ai
- Do NOT report via public GitHub issues
- Response: Acknowledgement within 3 business days, critical fixes within 30 days
- Agenta documentation: https://docs.agenta.ai/
- Agenta source repository: https://github.com/Agenta-AI/agenta
- Agenta security advisories: https://github.com/Agenta-AI/agenta/security