# ArgoCD and GitOps Integration
CloudTaser is annotation-driven by design. This makes it natively compatible with GitOps workflows: you declare your secret configuration in Git, ArgoCD syncs it, and the CloudTaser operator handles injection at pod creation time. No out-of-band secret management, no build-time dependencies, no sealed envelopes.
## The Problem: Secrets in GitOps
Secrets are ArgoCD's number one pain point. The GitOps model requires that the desired state of a cluster is fully described in Git. But secrets cannot be stored in Git in plaintext, which creates a gap: every GitOps team needs a secondary mechanism to handle sensitive data.
The common solutions -- Sealed Secrets, External Secrets Operator, SOPS -- each solve this gap differently, but all ultimately create Kubernetes Secrets in etcd. This means the cloud provider (and any party with etcd access) can read them.
CloudTaser eliminates this problem entirely. Secrets never enter Kubernetes at all. They travel directly from an EU-hosted vault into process memory at runtime.
## Why CloudTaser Is GitOps-Native
CloudTaser configuration lives entirely in pod annotations:
```yaml
annotations:
  cloudtaser.io/inject: "true"
  cloudtaser.io/vault-address: "https://vault.eu.example.com"
  cloudtaser.io/vault-role: "myapp-prod"
  cloudtaser.io/secret-paths: "secret/data/prod/db"
  cloudtaser.io/env-map: "password=PGPASSWORD,username=PGUSER"
```
These annotations are declarative, versionable, and safe to store in Git. They contain no secret values -- only references to vault paths and environment variable mappings. ArgoCD syncs them like any other Kubernetes manifest, and the CloudTaser operator resolves the actual secrets at pod creation time.
No pre-processing. No encryption step. No build pipeline integration. The manifests in Git are the complete source of truth.
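To make the annotation format concrete, here is a minimal sketch of how an `env-map` value could be parsed into a vault-key-to-environment-variable mapping. This is illustrative only, not the operator's actual parser:

```python
def parse_env_map(annotation: str) -> dict:
    """Parse a CloudTaser env-map annotation of the form
    "vault_key=ENV_VAR,other_key=OTHER_VAR" into a dict that maps
    vault secret keys to environment variable names."""
    mapping = {}
    for pair in annotation.split(","):
        key, _, env_var = pair.strip().partition("=")
        if not key or not env_var:
            raise ValueError(f"malformed env-map entry: {pair!r}")
        mapping[key] = env_var
    return mapping

print(parse_env_map("password=PGPASSWORD,username=PGUSER"))
# {'password': 'PGPASSWORD', 'username': 'PGUSER'}
```

Each left-hand side is a key inside the vault secret at `secret-paths`; each right-hand side is the environment variable the wrapper exports into the process before exec'ing the application.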
## Step 1: Deploy the Operator via ArgoCD
Create an ArgoCD Application that deploys CloudTaser from its Helm chart:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudtaser
  namespace: argocd
spec:
  project: default
  source:
    chart: cloudtaser
    repoURL: ghcr.io/skipopsltd/cloudtaser-helm
    targetRevision: "*"
    helm:
      values: |
        operator:
          vaultAddress: "https://vault.eu.example.com"
          replicaCount: 3
          ha: true
          leaderElect: true
        ebpf:
          enforceMode: true
  destination:
    server: https://kubernetes.default.svc
    namespace: cloudtaser-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
!!! tip "Pin the chart version in production"
    Use a specific `targetRevision` (e.g., `0.4.18`) instead of `"*"` for production deployments. This prevents unexpected upgrades during automatic syncs.
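For example, a pinned production `source` block might look like this (the version number is illustrative):

```yaml
source:
  chart: cloudtaser
  repoURL: ghcr.io/skipopsltd/cloudtaser-helm
  targetRevision: "0.4.18"  # pinned; bump deliberately via a Git commit
```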
## Step 2: Annotate Workloads in Git
In your application repository, add CloudTaser annotations to pod templates:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        cloudtaser.io/inject: "true"
        cloudtaser.io/vault-address: "https://vault.eu.example.com"
        cloudtaser.io/vault-role: "myapp-prod"
        cloudtaser.io/secret-paths: "secret/data/prod/db"
        cloudtaser.io/env-map: "password=PGPASSWORD,api_key=API_KEY"
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:v1.2.3
```
Commit and push. ArgoCD syncs the manifests, Kubernetes creates the pod, the CloudTaser webhook intercepts it, and the wrapper fetches secrets from vault into process memory.
## Step 3: ArgoCD Syncs, Webhook Handles Injection
The flow during an ArgoCD sync:
1. ArgoCD applies the Deployment manifest to the cluster
2. The Deployment's ReplicaSet controller creates a pod from the template
3. The CloudTaser mutating webhook intercepts the pod creation request
4. The webhook injects an init container (which copies the wrapper binary into a shared volume) and rewrites the entrypoint
5. The wrapper authenticates to vault, fetches secrets, and fork+execs the application
6. ArgoCD reports the Deployment as synced and healthy
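Conceptually, the pod spec that the API server persists after mutation looks roughly like the following. The container, volume, and image names here are illustrative, not the webhook's exact output:

```yaml
spec:
  initContainers:
    - name: cloudtaser-init                 # injected: copies the wrapper binary
      image: ghcr.io/skipopsltd/cloudtaser-wrapper:latest   # image name assumed
      volumeMounts:
        - name: cloudtaser-bin
          mountPath: /cloudtaser
  containers:
    - name: myapp
      image: myorg/myapp:v1.2.3
      command: ["/cloudtaser/wrapper"]      # rewritten entrypoint: wrapper runs as PID 1
      args: ["--", "/app/entrypoint"]       # original entrypoint, run via fork+exec
      volumeMounts:
        - name: cloudtaser-bin
          mountPath: /cloudtaser
  volumes:
    - name: cloudtaser-bin
      emptyDir: {}
```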
!!! note "ArgoCD health checks work normally"
    The wrapper exposes a health endpoint for Kubernetes liveness and readiness probes. ArgoCD's health assessment uses the standard Kubernetes pod phase and condition checks, which work without modification.
No ArgoCD plugins, no custom resource health checks, no special sync waves for secrets. The webhook operates transparently at the Kubernetes admission layer.
## Comparison: Secret Management Approaches for GitOps
| | Sealed Secrets | External Secrets Operator | SOPS | Vault Agent Sidecar | CloudTaser |
|---|---|---|---|---|---|
| Approach | Encrypt secrets in Git, decrypt in cluster | Sync secrets from external store to K8s | Encrypt files in Git, decrypt at deploy | Sidecar fetches secrets to files | Webhook injects wrapper, secrets in memory |
| Creates K8s Secret? | Yes | Yes | Yes | No (files on tmpfs) | No |
| Secrets in etcd? | Yes | Yes | Yes | No | No |
| Sidecar container? | No | No | No | Yes | No (wrapper runs as PID 1) |
| Build/deploy dependency? | kubeseal CLI at commit time | None | SOPS + KMS at deploy time | None | None |
| App code changes? | None | None | None | Yes (read files) | None |
| Cloud provider can read secrets? | Yes (etcd access) | Yes (etcd access) | Yes (etcd access) | Possible (tmpfs on node) | No (memory only, eBPF enforced) |
| GitOps compatible? | Yes | Yes | Yes | Yes | Yes (annotation-driven) |
| Runtime enforcement? | None | None | None | None | eBPF blocks exfiltration |
!!! warning "The etcd gap"
    Sealed Secrets, ESO, and SOPS all create standard Kubernetes Secrets as their final output. These Secrets are stored in etcd, which is managed by the cloud provider. On GKE, EKS, and AKS, the cloud provider (and potentially US government agencies via CLOUD Act / FISA 702) has access to etcd. CloudTaser closes this gap by never creating Kubernetes Secrets at all.
## Migration from Existing Tools
If you are currently using Sealed Secrets, ESO, or SOPS with ArgoCD, you can migrate workloads incrementally.
### Migration Strategy
1. Deploy CloudTaser alongside your existing secret management tool
2. Import secrets into your EU-hosted vault (if not already there)
3. Annotate workloads one at a time, replacing `secretKeyRef` with CloudTaser annotations
4. Remove the old secret resources (`SealedSecret`, `ExternalSecret`, or encrypted SOPS files) from Git
5. Delete orphaned Kubernetes Secrets from the cluster
The `cloudtaser migrate` CLI command can generate migration scripts for each tool. See the Migration Guide for detailed instructions per tool.
### Example: Migrating a Single Workload from ESO
Before (ESO):
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
spec:
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: myapp-secrets
  data:
    - secretKey: db_password
      remoteRef:
        key: prod/myapp
        property: db_password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: db_password
```
After (CloudTaser):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        cloudtaser.io/inject: "true"
        cloudtaser.io/secret-paths: "secret/data/prod/myapp"
        cloudtaser.io/env-map: "db_password=PGPASSWORD"
    spec:
      containers:
        - name: myapp
```
The `ExternalSecret` resource and the Kubernetes Secret it produced are both gone. The Deployment is simpler, and secrets no longer touch etcd.
## Best Practices
!!! tip "Use CloudTaserConfig CRDs for shared configuration"
    Instead of repeating the vault address and role in every Deployment, create a CloudTaserConfig CR per environment and reference it with `cloudtaser.io/config: "production"`. Store the CR in the same Git repo that ArgoCD manages.
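A shared config CR might look roughly like the following. The API group/version and `spec` field names are sketched from the annotation fields used elsewhere on this page and may differ from the actual CRD schema:

```yaml
apiVersion: cloudtaser.io/v1alpha1   # API version assumed
kind: CloudTaserConfig
metadata:
  name: production
  namespace: cloudtaser-system
spec:
  vaultAddress: "https://vault.eu.example.com"   # field names assumed
  vaultRole: "myapp-prod"
```

Workloads would then carry only `cloudtaser.io/inject: "true"` and `cloudtaser.io/config: "production"`, plus their per-app `secret-paths` and `env-map` annotations.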
!!! tip "Sync the CloudTaserConfig before workloads"
    If you use ArgoCD sync waves, deploy CloudTaserConfig resources in an earlier wave than the workloads that reference them.
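With ArgoCD's standard `argocd.argoproj.io/sync-wave` annotation, that ordering can be expressed like this (negative waves sync first; the wave numbers are illustrative):

```yaml
apiVersion: cloudtaser.io/v1alpha1   # API version assumed
kind: CloudTaserConfig
metadata:
  name: production
  annotations:
    argocd.argoproj.io/sync-wave: "-1"   # synced before wave 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    argocd.argoproj.io/sync-wave: "0"    # workloads sync after the config
```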
!!! tip "Monitor with cloudtaser audit"
    Run `cloudtaser audit` as a CronJob or CI step to verify that all workloads in a namespace are CloudTaser-protected and that no orphaned Kubernetes Secrets remain in etcd.
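A scheduled audit could be wired up as a standard Kubernetes CronJob along these lines. The CLI image name and the audit flags shown here are assumptions, not documented options:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloudtaser-audit
  namespace: cloudtaser-system
spec:
  schedule: "0 6 * * *"   # daily at 06:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: audit
              image: ghcr.io/skipopsltd/cloudtaser-cli:latest   # image assumed
              # flags assumed for illustration
              args: ["audit", "--namespace", "prod", "--fail-on-orphans"]
```

A non-zero exit from the audit container marks the Job as failed, which your alerting can pick up via standard Job failure metrics.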