Deployment
On-Premise Architecture
Deploy the entire OperativeOps stack on your own infrastructure. Every component — API servers, agent runtime, RAG engine, vector database, and MCP servers — runs inside your network.
- Full data sovereignty — nothing leaves your network
- Same features as cloud with zero external dependencies
- Integrate with your existing monitoring and logging
- Dedicated support team for on-premise installations
- Automated updates with signed release packages
Architecture layers: Client, Application, Integration (MCP), Data, and LLM (BYOM).
Bring Your Own Model
LLM Flexibility
No vendor lock-in. Use any LLM provider — or run models locally for complete data sovereignty.
Cloud Providers
OpenAI, Anthropic, Google, Groq, Cohere — use any commercial API with your existing contracts.
Enterprise Cloud
Azure OpenAI, AWS Bedrock, Google Vertex AI — leverage your cloud provider's AI services.
Self-Hosted
Run Llama, Mistral, Qwen, or any open model via Ollama, vLLM, or TGI on your own GPUs.
Per-Agent Config
Assign different models to different agents. Use GPT-4 for strategy, Llama for analytics (see the configuration sketch below).
Zero Data Leakage
When self-hosting, no data ever leaves your network. Full air-gap support available.
Hot Swappable
Switch providers without downtime or workflow changes. A/B test models in production.
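To make the per-agent model assignment above concrete, here is a minimal configuration sketch. The `AgentModelConfig` type, its field names, and the provider identifiers are illustrative assumptions, not the actual OperativeOps schema; consult the product documentation for the real format.

```typescript
// Hypothetical per-agent model bindings (type name, fields, and provider IDs are illustrative only).
interface AgentModelConfig {
  agent: string;      // agent this binding applies to
  provider: "openai" | "anthropic" | "azure-openai" | "ollama" | "vllm";
  model: string;      // provider-specific model identifier
  endpoint?: string;  // only needed for self-hosted backends
}

const modelAssignments: AgentModelConfig[] = [
  // Commercial API for high-stakes reasoning
  { agent: "strategy", provider: "openai", model: "gpt-4" },
  // Self-hosted open model for high-volume analytics, served from your own GPUs
  {
    agent: "analytics",
    provider: "ollama",
    model: "llama3",
    endpoint: "http://ollama.internal:11434",
  },
];

export default modelAssignments;
```

Because each agent resolves its model through a binding like this rather than a hard-coded provider, switching or A/B testing providers becomes a configuration change instead of a workflow change.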
Security & Compliance
Enterprise-Grade Security
OperativeOps is built from the ground up for enterprise security requirements, from encryption and RBAC to audit trails and prompt injection prevention.
- AES-256 encryption at rest, TLS 1.3 in transit
- 6-role RBAC with SSO/SAML integration (a mapping sketch follows this list)
- GDPR compliant with DPA available
- SOC 2 Type II certification in progress
- Full audit trails and AI governance dashboard
- Prompt injection prevention and PII redaction
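As a purely illustrative sketch of how the RBAC and SSO/SAML items above might fit together, the snippet below maps identity-provider groups to roles. The role names, the `RoleMapping` type, and the configuration shape are all assumptions made for this example; OperativeOps defines its own six roles and its own configuration format.

```typescript
// Hypothetical mapping from SSO/SAML groups to RBAC roles.
// Role names and the shape of this config are illustrative assumptions.
type Role = "admin" | "operator" | "developer" | "analyst" | "auditor" | "viewer";

interface RoleMapping {
  samlGroup: string; // group attribute asserted by the identity provider
  role: Role;        // role granted to members of that group
}

const roleMappings: RoleMapping[] = [
  { samlGroup: "ops-platform-admins", role: "admin" },
  { samlGroup: "sre-oncall",          role: "operator" },
  { samlGroup: "security-audit",      role: "auditor" },
  { samlGroup: "all-employees",       role: "viewer" },
];

export default roleMappings;
```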
Extensibility
Custom Integrations via MCP SDK
Connect OperativeOps to any internal system using the Model Context Protocol SDK. Define custom tools, data sources, and workflows that your AI agents can leverage.
- TypeScript, Python, and Go SDKs
- Connect internal APIs, databases, and services
- Define custom tools agents can invoke
- Webhook-based event streaming
- Full documentation and examples
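As a rough illustration of what a custom integration can look like, the sketch below uses the open-source MCP TypeScript SDK (`@modelcontextprotocol/sdk`) to expose one internal tool. The tool name, the inventory API URL, and the server metadata are invented for the example, and the OperativeOps MCP SDK may provide its own wrappers around this.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Minimal MCP server exposing one custom tool backed by a (hypothetical) internal API.
const server = new McpServer({ name: "inventory-tools", version: "1.0.0" });

// Agents can invoke "get_stock_level" with a SKU and receive the current stock data.
server.tool(
  "get_stock_level",
  "Look up the current stock level for a product SKU in the internal inventory system",
  { sku: z.string().describe("Product SKU to look up") },
  async ({ sku }) => {
    const res = await fetch(`https://inventory.internal/api/stock/${encodeURIComponent(sku)}`);
    const stock = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(stock) }] };
  }
);

// Serve the tool over stdio so an MCP-capable client can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

A tool registered this way becomes one of the custom tools your agents can invoke alongside the built-in integrations.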
Deployment
Deploy Your Way
From a single command to a full Kubernetes cluster — choose the deployment model that matches your scale and requirements.
Docker Compose
Single-node deployment
Spin up the entire OperativeOps stack with a single docker compose up command. Ideal for evaluations, small teams, and development environments.
$ docker compose -f operativeops.yml up -d
- One-command setup
- Includes all services
- Auto-configured networking
- Persistent volumes for data
Kubernetes
Multi-node cluster deployment
Helm charts for production-grade Kubernetes deployments. Supports auto-scaling, rolling updates, and integration with your existing observability stack.
$ helm install operativeops oci://registry.operativeops.com/charts/operativeops
- Horizontal auto-scaling
- Rolling updates with zero downtime
- Prometheus / Grafana metrics
- PodDisruptionBudgets included