Champ AI VPC Deployment runs workflow execution entirely inside your cloud account. Your PHI, prompts, screenshots, API responses, and workflow data never leave your VPC. Champ’s control plane handles the workflow designer, orchestration, and run metadata — the parts that don’t touch your data.
Architecture at a glance
Champ’s control plane (SaaS) handles the workflow designer, user auth, case records (IDs, status, metadata), run orchestration, and run metadata (IDs, statuses, timings, error classes).
Your VPC (data plane) handles workflow run execution, browser / code / LLM sandboxes, API connectors, and all run data and artifacts.
A lightweight worker deployed in your VPC opens an outbound mTLS connection to Champ’s control plane, pulls work, executes it locally, and returns metadata only. Champ never initiates an inbound connection — no VPN, no PrivateLink, no firewall changes.
How a run flows
- A user or trigger starts a workflow run from Champ’s control plane.
- The worker in your VPC polls over a single outbound mTLS channel.
- The worker pulls the workflow definition and run ID.
- For each node, the worker spins up a stateless sandbox — browser, code execution, or LLM call — inside your VPC.
- Sandboxes reach private APIs, EHRs, and databases over your existing network.
- Node results and artifacts are persisted to your managed Postgres.
- Only run metadata (status, timings, error classes) flows back to the control plane.
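The flow above can be sketched as a minimal worker loop. This is an illustrative sketch, not Champ's actual API: the channel methods, field names, and message shapes are all assumptions. The point it demonstrates is that only statuses, durations, and error classes are collected for the control plane; node inputs and outputs never appear in the records.

```python
import time

def process_run(run, execute_node):
    """Execute every node of one run locally and collect the metadata
    records that flow back to the control plane. Note that no node
    inputs or outputs appear in the records, only metadata."""
    records = []
    for node in run["nodes"]:
        started = time.monotonic()
        try:
            execute_node(node)          # runs inside a local sandbox
            status, error_class = "succeeded", None
        except Exception as exc:
            status, error_class = "failed", type(exc).__name__
        records.append({
            "run_id": run["run_id"],
            "node_id": node["id"],
            "status": status,
            "error_class": error_class,
            "duration_s": time.monotonic() - started,
        })
    return records

def run_worker(channel, execute_node):
    """Illustrative worker loop (channel API is hypothetical): poll the
    single outbound mTLS channel, execute locally, report metadata only."""
    while True:
        run = channel.pull()            # outbound poll, never inbound
        if run is None:
            time.sleep(1)               # idle back-off between polls
            continue
        channel.report(process_run(run, execute_node))
```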
What stays in your VPC
- Protected health information (PHI) and sensitive business data
- Workflow run execution itself
- LLM prompts and responses
- Browser screenshots and DOM replays
- Tool outputs, API responses, connector results
- Secrets and credentials
- Run-time state: inputs, intermediate values, node results, artifacts
What Champ holds on its side
- Workflow definitions you author
- Case records — case ID, status, case type, references to external records, workflow associations
- Run IDs and node IDs
- Per-step statuses, durations, error classes, retry counts
The metadata channel is schema-enforced on the wire: the message schema defines only metadata fields, so payload data has no field to travel in and cannot be transmitted, even accidentally.
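One way to picture schema enforcement (the field names below are assumptions, not Champ's wire format): the encoder only knows metadata fields, so a record carrying a prompt or screenshot is rejected before serialization.

```python
# Hypothetical wire schema: only these fields exist, so prompts,
# screenshots, and API responses have nowhere to travel.
ALLOWED_FIELDS = {"run_id", "node_id", "status", "duration_s", "error_class"}

def encode_metadata(record: dict) -> dict:
    """Reject any record that carries more than the metadata schema
    allows, before it ever reaches the wire."""
    extra = set(record) - ALLOWED_FIELDS
    if extra:
        raise ValueError(f"non-metadata fields rejected: {sorted(extra)}")
    return {k: record[k] for k in ALLOWED_FIELDS if k in record}
```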
Stateless sandboxes
Every workflow node executes inside a sandbox that lives only for the duration of that node. Three types:
| Sandbox | Purpose |
|---|---|
| Browser | Headless browser sessions for web workflows — form fills, navigation, DOM interaction, screenshots. |
| Code execution | Short-lived runtime (Python / JS) for data transformation and custom logic. |
| LLM calls | Outbound calls to your chosen LLM provider using your credentials. Prompts and responses never touch Champ. |
Sandboxes are stateless and ephemeral: no state persists between runs, and they are torn down as soon as the node completes. All durable state lives in your managed Postgres.
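The lifecycle can be sketched as a context manager (the names here are hypothetical, not Champ's implementation): the sandbox exists only for the duration of one node, and teardown runs even when the node fails.

```python
from contextlib import contextmanager

@contextmanager
def node_sandbox(kind: str):
    """Illustrative per-node sandbox: provisioned at entry, destroyed
    at exit, with no state retained between nodes."""
    sandbox = {"kind": kind, "alive": True, "scratch": {}}
    try:
        yield sandbox
    finally:
        sandbox["scratch"].clear()   # no state survives the node
        sandbox["alive"] = False     # torn down when the node completes

def execute_node(node):
    """Run a single node inside its own short-lived sandbox."""
    with node_sandbox(node["kind"]) as sb:
        sb["scratch"]["result"] = f"ran {node['id']} in {sb['kind']}"
        return sb["scratch"]["result"]
```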
Deployment options
| Option | Best for |
|---|---|
| Helm chart on Kubernetes (recommended) | Production deployments on EKS, GKE, AKS, or on-prem Kubernetes |
| Docker Compose | Proof-of-concept, single-engineer trials, non-production evaluations |
The Helm chart provides high availability (multi-replica, multi-AZ), horizontal autoscaling (worker pods scale on queue depth), clean upgrades and rollbacks, and integrates with standard Kubernetes security and observability tooling.
Infrastructure you provide
Champ AI runs on infrastructure you already operate under your existing compliance posture. The required footprint is intentionally minimal:
| Component | Service |
|---|---|
| Kubernetes cluster | EKS / GKE / AKS / on-prem |
| PostgreSQL 15+ | RDS / Cloud SQL / Aurora / Azure Database |
Postgres is the only stateful dependency — it holds run state, the job queue, and run artifacts (screenshots, browser replays). No Redis, no object store, no external secret manager is required. Minimum sizing, network policies, IAM roles, and required Postgres extensions are documented in the deployment guide.
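A minimal sketch of artifact persistence in that single database. The table name and columns are assumptions, and sqlite3 stands in for Postgres here purely so the sketch runs anywhere; in a real deployment the same statements would target your managed Postgres (with BYTEA instead of BLOB).

```python
import sqlite3  # stand-in for Postgres so the sketch is runnable

# Hypothetical artifacts table: run state, job queue, and artifacts
# all live in the one managed database.
SCHEMA = """
CREATE TABLE IF NOT EXISTS run_artifacts (
    run_id  TEXT NOT NULL,
    node_id TEXT NOT NULL,
    kind    TEXT NOT NULL,   -- e.g. 'screenshot', 'replay'
    body    BLOB NOT NULL    -- BYTEA in Postgres
);
"""

def save_artifact(conn, run_id, node_id, kind, body: bytes):
    """Persist one node artifact (screenshot, replay, tool output)."""
    conn.execute(
        "INSERT INTO run_artifacts (run_id, node_id, kind, body) "
        "VALUES (?, ?, ?, ?)",
        (run_id, node_id, kind, body),
    )
    conn.commit()

def load_artifacts(conn, run_id):
    """Fetch all artifacts recorded for a run."""
    rows = conn.execute(
        "SELECT node_id, kind, body FROM run_artifacts WHERE run_id = ?",
        (run_id,),
    )
    return list(rows)
```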
Your choices
LLM provider (BYO key). You bring your own LLM credentials from one of Anthropic, OpenAI, or Google. Champ’s workers call the provider directly from inside your VPC using your keys. Prompts and responses never transit Champ’s infrastructure.
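A direct provider call from inside the VPC might look like the sketch below. The endpoint, model name, and environment variable are examples in the style of the OpenAI chat completions API, not Champ code; the point is that the worker holds your key and talks to the provider directly.

```python
import json
import os
import urllib.request

def build_llm_request(prompt: str, api_key: str,
                      url: str = "https://api.openai.com/v1/chat/completions"):
    """Assemble a provider request using the customer's own key."""
    body = json.dumps({
        "model": "gpt-4o-mini",   # example model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # your key, your audit trail
            "Content-Type": "application/json",
        },
    )

def call_llm(prompt: str):
    """Worker-to-provider call, direct from your VPC; nothing
    transits Champ's infrastructure."""
    req = build_llm_request(prompt, os.environ["OPENAI_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```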
Connectors and APIs. External providers (EHRs, internal services, databases) are reached over your existing private network. Champ never routes this traffic.
Artifact handling. You define retention, encryption, and access policies for screenshots, browser replays, and tool outputs.
VPC management. Optionally, Champ can operate the VPC deployment on your behalf under scoped access controls, minimizing your engineering team’s involvement.
Security model
- mTLS between the in-VPC worker and Champ’s control plane
- Outbound-only traffic — no inbound connections from Champ
- All customer data encrypted at rest in your managed Postgres
- Workflow sandboxes are ephemeral and private-subnet
- Secrets remain in your existing secret manager (AWS Secrets Manager, Vault, etc.)
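The worker side of the mTLS channel can be sketched with Python's standard `ssl` module; the certificate paths are illustrative, and this is a sketch of the general client-side mTLS pattern rather than Champ's implementation. The worker verifies the control plane's certificate and presents its own, so both ends authenticate over the single outbound connection.

```python
import ssl

def base_client_context(ca_path=None) -> ssl.SSLContext:
    """Outbound client context: server certificate verification and
    hostname checking are always on."""
    ctx = ssl.create_default_context(cafile=ca_path)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def mtls_client_context(cert_path, key_path, ca_path=None):
    """Add the worker's own certificate so the control plane can
    authenticate it in turn (paths are illustrative)."""
    ctx = base_client_context(ca_path)
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)
    return ctx
```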
Compliance posture
Because all data processing occurs inside your VPC using your existing compliance-reviewed infrastructure (managed Postgres, KMS, IAM), Champ AI VPC Deployment inherits your HIPAA / SOC 2 / HITRUST controls for data handling. Champ’s BAA covers the control-plane metadata only.
Why choose VPC deployment
- Data never leaves your boundary. PHI, prompts, screenshots, and API responses stay under your encryption and access controls.
- No inbound connectivity required. Outbound mTLS only — no VPN, PrivateLink, or firewall changes to onboard.
- Inherit your compliance posture. Runs on your audited, BAA-covered infrastructure.
- BYO LLM keys. Your Anthropic / OpenAI / Google account, your rate limits, your audit trail.
- Single stateful dependency. Postgres only — no Redis, no object store, no external secret manager.