
Cluster-wide wildcard TLS trust

info

Use this runbook to roll out cluster-wide trust for your wildcard certificate using cert-manager trust-manager and Gatekeeper. The wildcard private key stays in a single trust namespace; only the public certificate is distributed to trust: enabled namespaces.

Overview

Goal:

  • Keep wildcard-reids-tls (cert + key) as a single canonical Secret in the cert-manager namespace, managed by Flux + SOPS.
  • Use trust-manager to create a ConfigMap bundle with the public certificate in every namespace labelled trust: enabled.
  • Use Gatekeeper mutation to:
    • Add a wildcard-reids-bundle volume backed by that ConfigMap.
    • Mount /etc/ssl/certs/wildcard-reids.crt into every container in trusted namespaces.
  • Let apps (for example ZITADEL) opt in by setting SSL_CERT_FILE=/etc/ssl/certs/wildcard-reids.crt.

This gives you:

  • Centralised management of the wildcard cert under GitOps.
  • Automatic propagation to trusted namespaces.
  • No distribution of the wildcard private key.

Architecture

  • Namespace infra-trust
    • Hosts the Gatekeeper Helm release and mutation policies (via Flux).
  • Namespace cert-manager
    • Runs cert-manager and trust-manager.
    • Holds the canonical wildcard-reids-tls Secret used by the Bundle as its source.
  • Any namespace with metadata.labels.trust=enabled
    • Gets a ConfigMap with wildcard-reids.crt injected by trust-manager.
    • Gets a wildcard-reids-bundle volume and volumeMount injected by Gatekeeper into all Pods.

Prerequisites

cert-manager and trust-manager

Confirm that cert-manager and trust-manager are installed:

kubectl get deploy -n cert-manager
kubectl get crd bundles.trust.cert-manager.io
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager              1/1     1            1           144d
cert-manager-cainjector   1/1     1            1           144d
cert-manager-webhook      1/1     1            1           144d
trust-manager             1/1     1            1           144d

NAME                            CREATED AT
bundles.trust.cert-manager.io   2025-07-01T13:13:48Z

Both should show Ready resources, and the Bundle CRD must exist.
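If you also want to record which versions are installed, the image tags can be read straight off the Deployments. A quick check (deployment names taken from the output above):

kubectl -n cert-manager get deploy cert-manager trust-manager \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[0].image}{"\n"}{end}'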

Flux managing cluster config

Confirm that Flux is managing the cluster config:

flux get kustomizations -n flux-system

You should see something like:

NAME                          REVISION                 SUSPENDED   READY   MESSAGE
app-source-kustomization      main@sha1:2c4e91b2       False       True    Applied revision: main@sha1:2c4e91b2
blaster-dev                   develop@sha1:c397d274    False       True    Applied revision: develop@sha1:c397d274
blaster-prod                  main@sha1:5a9367e5       False       True    Applied revision: main@sha1:5a9367e5
cloudflare-app                main@sha1:af552de2       False       True    Applied revision: main@sha1:af552de2
cloudflare-ns                 main@sha1:25c6e1cb       False       True    Applied revision: main@sha1:25c6e1cb
coach-app-dev                 dev@sha1:edd43eb1        False       True    Applied revision: dev@sha1:edd43eb1
flux-system                   main@sha1:25c6e1cb       False       True    Applied revision: main@sha1:25c6e1cb
identity-internal             main@sha1:8a0f69cf       False       True    Applied revision: main@sha1:8a0f69cf
origin-ca-issuer-controller   v0.12.1@sha1:86d908ed    False       True    Applied revision: v0.12.1@sha1:86d908ed
origin-ca-issuer-crds         v0.12.1@sha1:86d908ed    False       True    Applied revision: v0.12.1@sha1:86d908ed
origin-ca-issuer-ns           main@sha1:25c6e1cb       False       True    Applied revision: main@sha1:25c6e1cb
origin-ca-issuer-rbac         v0.12.1@sha1:86d908ed    False       True    Applied revision: v0.12.1@sha1:86d908ed

Also confirm the sources:

flux get sources git -n flux-system

You should see:

NAME                        REVISION                 SUSPENDED   READY   MESSAGE
app-manifests               main@sha1:2c4e91b2       False       True    stored artifact for revision 'main@sha1:2c4e91b2'
blaster-dev                 develop@sha1:c397d274    False       True    stored artifact for revision 'develop@sha1:c397d274'
blaster-prod                main@sha1:5a9367e5       False       True    stored artifact for revision 'main@sha1:5a9367e5'
cloudflare-app              main@sha1:af552de2       False       True    stored artifact for revision 'main@sha1:af552de2'
coach-app-dev               dev@sha1:edd43eb1        False       True    stored artifact for revision 'dev@sha1:edd43eb1'
flux-system                 main@sha1:25c6e1cb       False       True    stored artifact for revision 'main@sha1:25c6e1cb'
identity-internal           main@sha1:8a0f69cf       False       True    stored artifact for revision 'main@sha1:8a0f69cf'
origin-ca-issuer-upstream   v0.12.1@sha1:86d908ed    False       True    stored artifact for revision 'v0.12.1@sha1:86d908ed'

Add infra-trust to Flux

info

With trust-manager, the “one true copy” of the wildcard secret must live in the trust namespace that trust-manager is configured to read from (for you: cert-manager). The Bundle cannot point at a Secret in another namespace.
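The trust namespace is an install-time setting of trust-manager itself. If yours differs, it is exposed as a Helm value; a minimal sketch of the relevant values, assuming the upstream trust-manager chart (which defaults this to cert-manager):

# trust-manager HelmRelease values (sketch)
app:
  trust:
    # Bundles may only read Secret/ConfigMap sources from this single namespace
    namespace: cert-manager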

Structure

tree -a -L 7 -I '.git|.DS_Store|node_modules|.next|dist'
.
├── .sops.yaml
└── clusters
    └── my-cluster
        ├── cert-manager
        │   ├── 10-wildcard-reids-tls-secret.enc.yaml
        │   ├── cert-wildcard-reids-bundle.yaml
        │   └── kustomization.yaml
        ├── cert-manager.yaml
        ├── infra-trust
        │   ├── gatekeeper
        │   │   ├── 00-namespace.yaml
        │   │   ├── 20-gatekeeper-helm-repo.yaml
        │   │   ├── 30-gatekeeper-helm-release.yaml
        │   │   └── kustomization.yaml
        │   ├── infra-trust-gatekeeper-policies.yaml
        │   ├── infra-trust-gatekeeper.yaml
        │   ├── kustomization.yaml
        │   └── policies
        │       ├── 40-assign-wildcard-reids-volume.yaml
        │       ├── 50-assign-wildcard-reids-volumemount.yaml
        │       └── kustomization.yaml
        └── kustomization.yaml

Create infra-trust and subdirectories

cd ~/Projects/flux-config
git pull
mkdir -p clusters/my-cluster/infra-trust/gatekeeper
mkdir -p clusters/my-cluster/infra-trust/policies

Flux Kustomization for infra-trust

Ensure infra-trust has a Kustomization in clusters/my-cluster/infra-trust/kustomization.yaml:

# clusters/my-cluster/infra-trust/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# These two are Flux Kustomization CRDs that will be applied to the cluster
- infra-trust-gatekeeper.yaml
- infra-trust-gatekeeper-policies.yaml

Flux Kustomizations with dependsOn

Configure two Flux Kustomization CRs and make one depend on the other.

# clusters/my-cluster/infra-trust/infra-trust-gatekeeper.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-trust-gatekeeper
  namespace: flux-system
spec:
  interval: 5m0s
  timeout: 2m0s
  retryInterval: 30s

  # This is the path *inside the git repo*
  path: ./clusters/my-cluster/infra-trust/gatekeeper

  # Apply into this namespace in the cluster
  targetNamespace: infra-trust

  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system

  # Tell Flux how to decrypt SOPS-encrypted manifests in this path
  decryption:
    provider: sops
    secretRef:
      name: sops-age

  force: false

# clusters/my-cluster/infra-trust/infra-trust-gatekeeper-policies.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-trust-gatekeeper-policies
  namespace: flux-system
spec:
  interval: 5m0s
  timeout: 2m0s
  retryInterval: 30s
  path: ./clusters/my-cluster/infra-trust/policies
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  dependsOn:
    - name: infra-trust-gatekeeper
  force: false

  • infra-trust-gatekeeper installs CRDs (via the HelmRelease).
  • infra-trust-gatekeeper-policies will not run until infra-trust-gatekeeper is Ready, because of dependsOn.
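While the HelmRelease is still installing you can watch this ordering work; the policies Kustomization should stay not-ready with a dependency message until infra-trust-gatekeeper reports Ready (exact wording varies between Flux versions):

# The policies Kustomization reports a "dependency ... is not ready" style message
# until infra-trust-gatekeeper has reconciled successfully.
flux get kustomizations -n flux-system | grep infra-trust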

Kustomize under gatekeeper and policies

# clusters/my-cluster/infra-trust/gatekeeper/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- 00-namespace.yaml
- 20-gatekeeper-helm-repo.yaml
- 30-gatekeeper-helm-release.yaml

# clusters/my-cluster/infra-trust/policies/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- 40-assign-wildcard-reids-volume.yaml
- 50-assign-wildcard-reids-volumemount.yaml

infra-trust namespace managed by Flux

Create a Namespace manifest in Git rather than via kubectl create namespace:

# clusters/my-cluster/infra-trust/gatekeeper/00-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: infra-trust
  labels:
    role: infra

Wildcard TLS secret managed by SOPS

Create a SOPS-encrypted TLS secret in cert-manager.

note

Flux already has a .sops.yaml in the directory root.

# .sops.yaml
creation_rules:
  - path_regex: 'clusters/my-cluster/.*/(secrets/)?.*\.yaml$'
    encrypted_regex: '^(data|stringData)$'
    age: ['age...']

Generate the plain manifest from the real certificate files, then encrypt it in place with SOPS. From your workstation (adjust paths):

  1. Create a TLS secret manifest from your existing files:
kubectl create secret tls wildcard-reids-tls \
  --namespace cert-manager \
  --cert=PATH/NAME.crt \
  --key=PATH/NAME.key \
  --dry-run=client -o yaml > clusters/my-cluster/cert-manager/10-wildcard-reids-tls-secret.enc.yaml
  2. Encrypt it with SOPS:
sops --encrypt --in-place clusters/my-cluster/cert-manager/10-wildcard-reids-tls-secret.enc.yaml
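
Before committing, it is worth confirming that only the data fields were encrypted and that the manifest still decrypts cleanly; a quick sketch, assuming your age key is available locally:

# Encrypted values should appear as ENC[AES256_GCM,...]; kind/metadata stay readable
grep -c 'ENC\[AES256_GCM' clusters/my-cluster/cert-manager/10-wildcard-reids-tls-secret.enc.yaml

# Round-trip: decrypt and validate the manifest without applying it
sops --decrypt clusters/my-cluster/cert-manager/10-wildcard-reids-tls-secret.enc.yaml \
  | kubectl apply --dry-run=client -f -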

Gatekeeper via Flux in infra-trust

info

Use the dependsOn ordering above to prevent a race condition where the mutation policies are applied before the Gatekeeper CRDs have been installed.

Gatekeeper HelmRepository

# clusters/my-cluster/infra-trust/gatekeeper/20-gatekeeper-helm-repo.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: gatekeeper
  namespace: infra-trust
spec:
  interval: 1h
  url: https://open-policy-agent.github.io/gatekeeper/charts

Gatekeeper HelmRelease

# clusters/my-cluster/infra-trust/gatekeeper/30-gatekeeper-helm-release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: gatekeeper
  namespace: infra-trust
spec:
  interval: 30m
  chart:
    spec:
      chart: gatekeeper
      version: "3.20.1"
      sourceRef:
        kind: HelmRepository
        name: gatekeeper
        namespace: infra-trust
      interval: 12h

  install:
    createNamespace: false
    remediation:
      retries: 3

  upgrade:
    remediation:
      retries: 3

  # Minimal values – let the chart install with defaults first
  values:
    enableCRDs: true

Gatekeeper mutations for wildcard bundle

Assign volume

# clusters/my-cluster/infra-trust/policies/40-assign-wildcard-reids-volume.yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: wildcard-reids-volume
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
    namespaceSelector:
      matchLabels:
        trust: enabled
    excludedNamespaces:
      - kube-system
      - cert-manager
      - infra-trust
      - public
  location: "spec.volumes[name:wildcard-reids-bundle]"
  parameters:
    assign:
      value:
        name: wildcard-reids-bundle
        configMap:
          name: wildcard-reids-bundle
          defaultMode: 0644
          optional: false
          items:
            - key: wildcard-reids.crt
              path: wildcard-reids.crt

Assign volumeMount

# clusters/my-cluster/infra-trust/policies/50-assign-wildcard-reids-volumemount.yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: wildcard-reids-volumemount
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
    namespaceSelector:
      matchLabels:
        trust: enabled
    excludedNamespaces:
      - kube-system
      - cert-manager
      - infra-trust
      - public
  location: "spec.containers[name:*].volumeMounts[name:wildcard-reids-bundle]"
  parameters:
    assign:
      value:
        name: wildcard-reids-bundle
        mountPath: /etc/ssl/certs/wildcard-reids.crt
        subPath: wildcard-reids.crt
        readOnly: true

After Flux reconciliation, any new Pod created in a trusted namespace will see:

  • wildcard-reids-bundle volume.
  • /etc/ssl/certs/wildcard-reids.crt file in each container.
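
To confirm the mutations actually landed on a workload, inspect any freshly created Pod in a trusted namespace; a sketch using jsonpath filters (replace <pod-name> with a real Pod):

# Volume injected by the wildcard-reids-volume Assign
kubectl -n apps get pod <pod-name> \
  -o jsonpath='{.spec.volumes[?(@.name=="wildcard-reids-bundle")]}'; echo

# Mount path injected into each container by the wildcard-reids-volumemount Assign
kubectl -n apps get pod <pod-name> \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.volumeMounts[?(@.name=="wildcard-reids-bundle")].mountPath}{"\n"}{end}'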

Wildcard bundle distribution via trust-manager

Create cert-manager directory in Git

mkdir -p clusters/my-cluster/cert-manager

cert-manager Kustomization

# clusters/my-cluster/cert-manager/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# Canonical wildcard TLS secret, SOPS-encrypted
- 10-wildcard-reids-tls-secret.enc.yaml
# Trust bundle for wildcard-reids
- cert-wildcard-reids-bundle.yaml

Wildcard bundle Bundle CR

Use trust-manager to distribute only the public certificate to all trust: enabled namespaces:

# clusters/my-cluster/cert-manager/cert-wildcard-reids-bundle.yaml
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: wildcard-reids-bundle
  namespace: cert-manager
spec:
  sources:
    - secret:
        name: wildcard-reids-tls
        key: tls.crt
  target:
    configMap:
      key: "wildcard-reids.crt"
    namespaceSelector:
      matchLabels:
        trust: enabled

Add infra-trust and cert-manager to the cluster Kustomization

Ensure infra-trust and cert-manager are listed in your top-level clusters/my-cluster/kustomization.yaml:

# clusters/my-cluster/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cert-manager.yaml
- ./flux-system
- ./infra-trust
- ./cert-manager

Test in apps namespace first

Add the trust: enabled label to the apps namespace manifest:

# clusters/my-cluster/apps/00-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    name: apps
    trust: enabled
    role: app

Reconcile Flux to take effect

Commit and push, then force Flux to reconcile:

git add .
git commit -m "Added infra-trust and cert-manager to flux and trust to apps namespace"
git push
flux reconcile kustomization flux-system -n flux-system --with-source
flux reconcile kustomization cert-manager -n flux-system --with-source
flux reconcile kustomization infra-trust-gatekeeper -n flux-system --with-source
flux reconcile kustomization infra-trust-gatekeeper-policies -n flux-system --with-source
flux reconcile kustomization app-source-kustomization -n flux-system --with-source

Then confirm the Kustomizations reconciled and Gatekeeper is installed:

kubectl -n flux-system describe kustomization infra-trust-gatekeeper
kubectl -n flux-system describe kustomization infra-trust-gatekeeper-policies
kubectl get crd | grep -i gatekeeper
kubectl get crd | grep -i mutations.gatekeeper
kubectl -n infra-trust get helmrelease,helmrepository
kubectl -n cert-manager get secret wildcard-reids-tls
kubectl -n infra-trust get all
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/gatekeeper-audit-7dd6cf86dd-sndgg                1/1     Running   0          88s
pod/gatekeeper-controller-manager-7875d496cc-gn2rw   1/1     Running   0          88s
pod/gatekeeper-controller-manager-7875d496cc-lfrhs   1/1     Running   0          88s
pod/gatekeeper-controller-manager-7875d496cc-m5mgw   1/1     Running   0          88s

NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/gatekeeper-webhook-service   ClusterIP   10.50.109.43   <none>        443/TCP   88s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gatekeeper-audit                1/1     1            1           88s
deployment.apps/gatekeeper-controller-manager   3/3     3            3           88s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/gatekeeper-audit-7dd6cf86dd                1         1         1       88s
replicaset.apps/gatekeeper-controller-manager-7875d496cc   3         3         3       88s
kubectl get bundle wildcard-reids-bundle -n cert-manager -o yaml
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  creationTimestamp: "2025-11-23T05:49:27Z"
  generation: 1
  labels:
    kustomize.toolkit.fluxcd.io/name: cert-manager
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: wildcard-reids-bundle
  resourceVersion: "55532708"
  uid: feda1555-9a57-4390-a681-535588401788
spec:
  sources:
  - secret:
      key: tls.crt
      name: wildcard-reids-tls
  target:
    configMap:
      key: wildcard-reids.crt
    namespaceSelector:
      matchLabels:
        trust: enabled
status:
  conditions:
  - lastTransitionTime: "2025-11-23T05:49:27Z"
    message: 'Successfully synced Bundle to namespaces that match this label selector:
      trust=enabled'
    observedGeneration: 1
    reason: Synced
    status: "True"
    type: Synced
kubectl -n cert-manager get bundle
kubectl -n cert-manager describe bundle wildcard-reids-bundle
kubectl -n cert-manager get secret wildcard-reids-tls
kubectl get configmap -A | grep wildcard-reids-bundle || true
kubectl get ns -L trust

Validation and checks

To avoid waiting for the next scheduled reconciliation, label the apps namespace directly so the cluster matches what is already declared in Git:

kubectl label ns apps trust=enabled --overwrite
kubectl get ns -L trust
NAME                   STATUS   AGE     TRUST
apps                   Active   144d    enabled
kubectl get configmap -A | grep wildcard-reids-bundle || true
apps                   wildcard-reids-bundle                                  1      94s

Check Gatekeeper is running

kubectl get pods -n infra-trust -l app=gatekeeper
NAME                                             READY   STATUS    RESTARTS   AGE
gatekeeper-audit-7dd6cf86dd-sndgg                1/1     Running   0          175m
gatekeeper-controller-manager-7875d496cc-gn2rw   1/1     Running   0          175m
gatekeeper-controller-manager-7875d496cc-lfrhs   1/1     Running   0          175m
gatekeeper-controller-manager-7875d496cc-m5mgw   1/1     Running   0          175m
kubectl get assign,assignmetadata -A
NAME                                                         AGE
assign.mutations.gatekeeper.sh/wildcard-reids-volume         3h2m
assign.mutations.gatekeeper.sh/wildcard-reids-volumemount    3h2m
kubectl get assign.mutations.gatekeeper.sh -A
NAME                         AGE
wildcard-reids-volume        3h3m
wildcard-reids-volumemount   3h3m

You should see Gatekeeper controller and audit pods running, and the Assign objects present.

Confirm the trust label is present

Confirm the trust label is present in apps:

kubectl get ns apps -o jsonpath='{.metadata.labels.trust}'; echo
enabled

Confirm the bundle ConfigMap is present

kubectl get configmap wildcard-reids-bundle -n apps
NAME                    DATA   AGE
wildcard-reids-bundle   1      122m

Test outbound TLS from the test pod

Start a disposable test pod, install openssl, curl and openldap-clients inside it, and run the tests:

kubectl -n apps run test-tls \
--rm -i --tty \
--image=alpine:3.20 \
--restart=Never \
-- sh -c "apk add --no-cache openldap-clients openssl curl && sh"
  • --rm → delete the pod as soon as the shell exits.
  • --restart=Never → run it as a one-shot pod, not a Deployment.
  • apk add --no-cache openldap-clients openssl curl → installs ldapsearch, openssl and curl inside the temporary pod.

Check cert has been mounted

  1. Check the file exists and its permissions:
ls -l /etc/ssl/certs | grep wildcard
ls -l /etc/ssl/certs/wildcard-reids.crt
-rw-r--r--    1 root     root          7256 Nov 23 09:22 wildcard-reids.crt
-rw-r--r--    1 root     root          7256 Nov 23 09:22 /etc/ssl/certs/wildcard-reids.crt
  2. Inspect the cert to confirm subject and issuer:
openssl x509 -in /etc/ssl/certs/wildcard-reids.crt -noout -subject -issuer
subject=CN=*.reids.net.au
  3. Confirm it is actually mounted from a volume:
grep wildcard-reids /proc/mounts || grep wildcard /proc/mounts
/dev/mapper/ubuntu--vg-ubuntu--lv /etc/ssl/certs/wildcard-reids.crt ext4 ro,relatime 0 0

Curl test

info

Should return SSL certificate verify ok.

curl -v --cacert /etc/ssl/certs/wildcard-reids.crt https://nas.reids.net.au
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/wildcard-reids.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: CN=*.reids.net.au
* subjectAltName: host "nas.reids.net.au" matched cert's "*.reids.net.au"
* SSL certificate verify ok.

In this Alpine-based test pod, curl behaves this way because:

  • The image has the ca-certificates package installed.
  • curl is compiled/packaged to use the system CA bundle in /etc/ssl/cert.pem and /etc/ssl/certs.
  • SSL.com’s root CA is in that bundle, so the *.reids.net.au chain validates automatically.

Essentially, curl loads the system CA bundle, validates the full chain, and succeeds with no extra configuration needed.
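
A quick way to see this from the same test pod is to let curl rely purely on the system bundle; assuming the NAS chain really does terminate in a root shipped by ca-certificates, this should also succeed without --cacert:

# No --cacert: verification uses the image's own CA bundle
curl -sS -o /dev/null -w '%{http_code}\n' https://nas.reids.net.au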

Openssl test

info

Should return verify return:1 and verification: OK.

openssl s_client \
-connect NAS_IP_ADDRESS:636 \
-servername nas.reids.net.au \
-CAfile /etc/ssl/certs/wildcard-reids.crt \
-showcerts </dev/null
CONNECTED(00000003)
depth=2 Certification Authority RSA
verify return:1
depth=1 subCA
verify return:1
depth=0 CN=*.reids.net.au
verify return:1

Server certificate
subject=CN=*.reids.net.au

SSL handshake has read 2889 bytes and written 404 bytes
Verification: OK

openssl s_client behaves differently: it is a low-level test tool, and by default it:

  • Does not always verify the hostname (you usually add -verify_hostname if you really want it), and
  • Relies on OpenSSL’s compiled-in CA path, which in a container can be:
    • Empty, or
    • Not pointing at /etc/ssl/cert.pem / /etc/ssl/certs.

So unless the image is configured to point OpenSSL at the same CA bundle as curl, you end up having to provide a CA file explicitly (here via -CAfile).
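
If you want to see where OpenSSL in the image is actually looking for its default trust store, these standard commands show the compiled-in directory (a sketch; the path will vary by image):

# Compiled-in OPENSSLDIR that s_client falls back to for CA material
openssl version -d
# List whatever CA material (if any) lives there
ls -l "$(openssl version -d | cut -d'"' -f2)"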

LDAPS test

info

Should return all objects when binding as the zitadel-bind user, which has read-only access to the LDAP server.

LDAPTLS_CACERT=/etc/ssl/certs/wildcard-reids.crt \
LDAPTLS_REQCERT=allow \
ldapsearch -H ldaps://NAS_IP_ADDRESS:636 \
-x \
-D "uid=zitadel-bind,ou=people,dc=reids,dc=net,dc=au" \
-W \
-b "dc=reids,dc=net,dc=au" \
-LLL "(objectClass=*)"
  • ldaps://… does the SSL/TLS wrapping.
  • LDAPTLS_CACERT tells the OpenLDAP client which CA/chain file to trust, just like the openssl -CAfile /etc/ssl/certs/wildcard-reids.crt test.
dn: dc=reids,dc=net,dc=au
dc: reids
objectClass: domain

dn: ou=people,dc=reids,dc=net,dc=au
ou: people
objectClass: organizationalUnit

dn: ou=group,dc=reids,dc=net,dc=au
ou: group
objectClass: organizationalUnit

ldapsearch over LDAPS needs a CA file because OpenLDAP has its own TLS configuration layer. It may use OpenSSL or GnuTLS under the hood, but it does not automatically inherit curl’s idea of trust.

It looks in:

  • /etc/openldap/ldap.conf (or distro equivalent), and/or
  • Environment variables.

If OpenLDAP does not find a trusted CA via those settings, it treats the server certificate as untrusted unless you relax verification (for example with LDAPTLS_REQCERT=allow or never), which defeats the purpose of this test.
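
If you would rather not export LDAPTLS_CACERT for every invocation, the same trust can be set persistently in the OpenLDAP client configuration; a sketch, assuming the Alpine openldap-clients layout where the file lives at /etc/openldap/ldap.conf:

# /etc/openldap/ldap.conf (path varies by distro)
# Trust the Gatekeeper-mounted wildcard bundle for all OpenLDAP client tools
TLS_CACERT /etc/ssl/certs/wildcard-reids.crt
# Insist on a valid, verifiable server certificate
TLS_REQCERT demand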


Migrate the manually created secrets

Once you have confirmed that the wildcard-reids-bundle ConfigMap is deployed automatically into the apps namespace, the manually created secrets can be migrated.

Check existing usage

kubectl get secrets -A --field-selector metadata.name=wildcard-reids-tls
NAMESPACE           NAME                 TYPE                DATA   AGE
gitlab              wildcard-reids-tls   kubernetes.io/tls   2      144d
identity-internal   wildcard-reids-tls   kubernetes.io/tls   2      19h
ingress-nginx       wildcard-reids-tls   kubernetes.io/tls   2      144d
note

Only identity-internal is currently managed by Flux; gitlab and ingress-nginx are not, and this process does not apply to them.

Add the cert bundle to the identity-internal namespace through Git

Label trusted namespaces in Git (not with kubectl).

The wildcard-reids-tls secret was created manually in identity-internal for ZITADEL LDAPS; it will be replaced after migrating to the new trust path.

warning

Do not rely solely on kubectl label namespace. Namespaces are managed by Flux; any delete/recreate or drift could drop labels that are not in Git.

Add trust: enabled directly into the namespace manifests under flux-config.

# clusters/my-cluster/identity-internal/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: identity-internal
  labels:
    name: identity-internal
    trust: enabled
    role: app

Commit and push, then force Flux to reconcile:

git add .
git commit -m "Label trusted namespaces for wildcard TLS bundle"
git push

To avoid waiting for the next scheduled reconciliation, label the identity-internal namespace directly so the cluster matches what is already declared in Git.

kubectl label ns identity-internal trust=enabled --overwrite
kubectl get ns -L trust
NAME                STATUS   AGE    TRUST
apps                Active   144d   enabled
identity-internal   Active   31h    enabled
kubectl get configmap -A | grep wildcard-reids-bundle || true
apps                wildcard-reids-bundle   1   135m
identity-internal   wildcard-reids-bundle   1   32s
kubectl get ns blaster-dev -o jsonpath='{.metadata.labels.trust}'; echo
flux reconcile kustomization flux-system --with-source

The identity-internal namespace now has both the manually created wildcard-reids-tls secret and the auto-deployed wildcard-reids-bundle ConfigMap.


Configure ZITADEL (and others) to use the bundle

Update the ZITADEL HelmRelease to use the bundle path as its CA file:

info

Make this change in the identity-internal repo, not the flux-config repo.

  1. Remove the existing custom mounting (the old block that mounted wildcard-reids-tls and used it as the CA for LDAP / TLS trust):
extraVolumes:
  - name: ldap-ca
    secret:
      secretName: wildcard-reids-tls
      items:
        - key: tls.crt
          path: nas-ldap-ca.pem

extraVolumeMounts:
  - name: ldap-ca
    mountPath: /etc/ssl/certs/nas-ldap-ca.pem
    subPath: nas-ldap-ca.pem
    readOnly: true

extraEnv:
  - name: SSL_CERT_FILE
    value: /etc/ssl/certs/nas-ldap-ca.pem
  2. Add just this under env:
# k8s/prod/50-helm-release-zitadel.yaml
- name: SSL_CERT_FILE
  value: /etc/ssl/certs/wildcard-reids.crt
  3. The env block should look like this:

# Environment variables for database passwords
env:
  - name: ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: zitadel-db-secret
        key: POSTGRES_PASSWORD
  - name: ZITADEL_DATABASE_POSTGRES_USER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: zitadel-db-secret
        key: POSTGRES_PASSWORD
  - name: SSL_CERT_FILE
    value: /etc/ssl/certs/wildcard-reids.crt

This tells Go’s TLS stack inside ZITADEL to trust the wildcard certificate presented by your NAS (and any other endpoint using that cert), solving x509: certificate signed by unknown authority without disabling verification.

Other workloads that need to trust the wildcard can follow the same pattern.
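
A quick way to confirm the change landed is to read the rendered environment off the ZITADEL workload; a sketch, assuming the Deployment created by the Helm chart is named zitadel:

# Should print /etc/ssl/certs/wildcard-reids.crt once Flux has applied the new HelmRelease
kubectl -n identity-internal get deploy zitadel \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="SSL_CERT_FILE")].value}'; echo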


Reconcile Flux to take effect

Commit and push, then force Flux to reconcile:

git add .
git commit -m "Changed Zitadel to use wildcard from file."
git push
flux reconcile kustomization flux-system --with-source

Validation and checks

Verify ZITADEL still works

Once you are happy with apps, repeat the same test-pod pattern in identity-internal, or:

  • Ensure ZITADEL pods have restarted after the HelmRelease change.
  • Trigger the LDAP test again in the ZITADEL console.
  • Confirm no x509: certificate signed by unknown authority errors appear in the logs.

Result in each trusted namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: wildcard-reids-bundle
  namespace: identity-internal
data:
  wildcard-reids.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

Check:

kubectl get configmap wildcard-reids-bundle -n identity-internal -o yaml

Delete the manually created secret

When the bundle-driven trust is confirmed, delete the manually created secret.

Check existing usage

kubectl get configmap wildcard-reids-bundle -n identity-internal -o yaml

Returns:

kind: ConfigMap
metadata:
  annotations:
    trust.cert-manager.io/hash:
  labels:
    trust.cert-manager.io/bundle: wildcard-reids-bundle
  name: wildcard-reids-bundle
  namespace: identity-internal
  ownerReferences:
  - apiVersion: trust.cert-manager.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Bundle
    name: wildcard-reids-bundle
kubectl get secrets -A --field-selector metadata.name=wildcard-reids-tls

Returns:

NAMESPACE           NAME                 TYPE                DATA   AGE
identity-internal   wildcard-reids-tls   kubernetes.io/tls   2      19h
note

Only identity-internal is currently managed by Flux; gitlab and ingress-nginx are not, and this process does not apply to them yet.

Remove the manual secret from identity-internal

  1. Confirm ZITADEL no longer references the wildcard-reids-tls secret directly (only uses SSL_CERT_FILE).
  2. Remove any YAML under clusters/my-cluster/identity-internal that creates wildcard-reids-tls (if you ever commit it there).
  3. Delete the manual secret if it still exists:
kubectl delete secret wildcard-reids-tls -n identity-internal

All future trust now flows from:

cert-manager/wildcard-reids-tls → trust-manager Bundle → wildcard-reids-bundle ConfigMap → Gatekeeper volume/volumeMount → /etc/ssl/certs/wildcard-reids.crt.

Confirm that identity-internal no longer has a secret

kubectl get secrets -A --field-selector metadata.name=wildcard-reids-tls
NAMESPACE       NAME                 TYPE                DATA   AGE
cert-manager    wildcard-reids-tls   kubernetes.io/tls   2      5h
gitlab          wildcard-reids-tls   kubernetes.io/tls   2      144d
infra-trust     wildcard-reids-tls   kubernetes.io/tls   2      4h56m
ingress-nginx   wildcard-reids-tls   kubernetes.io/tls   2      144d

Remove the Flux-applied secret from infra-trust

Earlier, I did have a wildcard-reids-tls secret defined under infra-trust, applied by Flux.

Flux added the labels:

kustomize.toolkit.fluxcd.io/name=flux-system
kustomize.toolkit.fluxcd.io/namespace=flux-system
kubectl -n infra-trust describe secret wildcard-reids-tls
Name:         wildcard-reids-tls
Namespace:    infra-trust
Labels:       kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  <none>

Type:         kubernetes.io/tls

Data
====
tls.crt: 7257 bytes
tls.key: 3272 bytes
  • Canonical secret now lives in cert-manager, and I removed the infra-trust YAML.
  • Flux prune doesn’t “garbage collect everything with its label”; it only prunes objects that are still in a Kustomization’s inventory. Once the manifest is removed, that secret becomes an orphan and is left alone.

So right now:

  • cert-manager/wildcard-reids-tls is the only Flux-desired TLS secret.
  • infra-trust/wildcard-reids-tls is a stale copy that nothing in Git mentions.
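
If you want to double-check what Flux still tracks before deleting anything, you can list each Kustomization's inventory; a quick check, assuming a flux CLI new enough to have the tree subcommand:

# The stale infra-trust secret should not appear anywhere in this tree
flux tree kustomization flux-system -n flux-system | grep -i wildcard-reids-tls || true
kubectl -n infra-trust get secret wildcard-reids-tls --show-labels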

Safe to delete the infra-trust one:

kubectl -n infra-trust delete secret wildcard-reids-tls
secret "wildcard-reids-tls" deleted from infra-trust namespace

kubectl get secrets -A --field-selector metadata.name=wildcard-reids-tls
NAMESPACE       NAME                 TYPE                DATA   AGE
cert-manager    wildcard-reids-tls   kubernetes.io/tls   2      5h25m
gitlab          wildcard-reids-tls   kubernetes.io/tls   2      144d
ingress-nginx   wildcard-reids-tls   kubernetes.io/tls   2      144d
  • cert-manager (Flux + SOPS, canonical)
  • gitlab (manual, namespace not managed by Flux)
  • ingress-nginx (manual, namespace not managed by Flux)

New trust flow

All trust now flows in this order:

  1. cert-manager/wildcard-reids-tls (canonical TLS Secret, managed by Flux + SOPS) →
  2. trust-manager Bundle (wildcard-reids-bundle in cert-manager) →
  3. ConfigMap/wildcard-reids-bundle in all namespaces labelled trust=enabled →
  4. Gatekeeper Assign mutators add a wildcard-reids-bundle volume and mount /etc/ssl/certs/wildcard-reids.crt into all Pods in trusted namespaces →
  5. Apps that need LDAPS/HTTPS trust set SSL_CERT_FILE=/etc/ssl/certs/wildcard-reids.crt.

Verification checklist

  • infra-trust namespace exists and is managed by Flux.
  • wildcard-reids-tls secret exists in cert-manager and is SOPS-encrypted in Git.
  • Bundle wildcard-reids-bundle applied and ConfigMap wildcard-reids-bundle exists in each trust: enabled namespace.
  • Gatekeeper Pods are running in infra-trust.
  • Gatekeeper Assign mutators (wildcard-reids-volume and wildcard-reids-volumemount) exist.
  • Pods in trusted namespaces have:
    • A wildcard-reids-bundle volume.
    • /etc/ssl/certs/wildcard-reids.crt mounted in containers.
  • ZITADEL and other apps that talk to LDAPS/HTTPS set SSL_CERT_FILE=/etc/ssl/certs/wildcard-reids.crt.
  • Namespace labels trust=enabled are defined in Git and visible via kubectl get ns -o jsonpath.