IdP internal deployment
Use this guide to deploy the internal ZITADEL instance (auth.reids.net.au) into the cluster using FluxCD.
This runbook covers the Kubernetes and GitOps foundation only:
- identity-internal repo structure and manifests (PostgreSQL, ZITADEL HelmRelease, ingress/TLS).
- Flux source and Kustomization wiring so the deployment reconciles cleanly.
It does not cover post-deploy configuration such as SMTP, NAS LDAP, Kubernetes OIDC (kubelogin), RBAC, or Kubernetes Dashboard SSO. Those are documented in the dedicated runbooks in this series.
Identity provider series
- IdP dual overview
- IdP dual architecture
- IdP internal deployment - you are here
- IdP internal console
- IdP internal SMTP
- IdP internal LDAP
- IdP internal OIDC
- IdP internal OAUTH2 proxy
- IdP internal backup and restore
1. Goal
Deploy the internal ZITADEL instance (auth.reids.net.au) into the cluster using FluxCD, including the backing PostgreSQL database and ingress/TLS, so it is ready for console configuration and downstream integrations.
This runbook covers:
- identity-internal deployment into the cluster (namespace, HelmRelease, ingress/TLS).
- PostgreSQL backing DB for internal ZITADEL.
- Flux configuration for reconciling identity-internal.
- Verification that ZITADEL is reachable and healthy (deployment-level checks).
Out of scope (covered in dedicated runbooks in this series):
- Initial console configuration
- SMTP
- NAS LDAP IdP configuration
- Kubernetes API OIDC integration (including the groupsClaim Action and RBAC)
- Kubernetes Dashboard SSO via OAuth2 Proxy
2. Prerequisites
- Working kubectl access to the cluster with cluster-admin rights.
- FluxCD installed and managing the flux-config repo.
- The identity-internal Git repository exists and is reachable by Flux.
- SOPS age key configured in the cluster and available as the sops-age secret in flux-system.
- DNS resolves auth.reids.net.au to the ingress entrypoint you intend to use.
- Cluster ingress and TLS are ready (for example NGINX ingress plus cert-manager/trust as per your trust runbook).
- StorageClass available for PostgreSQL persistent volumes.
LDAP requirements (bind DN, user DN, groups, NAS LDAPS certificate trust) are prerequisites for the LDAP runbook, not for deploying the ZITADEL stack itself.
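The prerequisites above all depend on a handful of CLI tools being available. A quick pre-flight sketch (a hypothetical helper, not part of the runbook) to confirm they are on PATH before going further:

```shell
# pre-flight: report which of the CLI tools this runbook relies on
# are installed; anything "missing" must be fixed before continuing
for tool in kubectl flux sops git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

This only checks the workstation; cluster-side prerequisites (the sops-age secret, DNS, StorageClass) still need the checks described above.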
3. Target architecture
Repository structure (this runbook builds flux-config and identity-internal):
1. flux-config
└── clusters/my-cluster/
    ├── identity-internal
    │   ├── 10-kustomization-app.yaml
    │   ├── kustomization.yaml
    │   ├── namespace.yaml
    │   └── source.yaml
    ├── identity-public
    │   ├── 00-kustomization-ns.yaml
    │   ├── 10-kustomization-app.yaml
    │   ├── kustomization.yaml
    │   ├── ns
    │   │   ├── kustomization.yaml
    │   │   └── namespace.yaml
    │   └── source.yaml
2. identity-internal
├── .gitignore
├── .sops.yaml
├── k8s
│   └── prod
│       ├── 10-helm-repository.yaml
│       ├── 20-secrets-db.enc.yaml
│       ├── 21-secrets-zitadel.enc.yaml
│       ├── 30-db-statefulset.yaml
│       ├── 50-helm-release-zitadel.yaml
│       ├── 60-ingress.yaml
│       └── kustomization.yaml
3. identity-public (not created yet)
└── k8s/prod/
    ├── 10-helm-repository.yaml
    ├── 20-secrets-db.enc.yaml
    ├── 30-db-statefulset.yaml
    ├── 50-helm-release-zitadel.yaml
    └── ...
4. Configure identity-internal repo
Start with the identity-internal repo. Create a blank repo in GitLab and clone it into your Projects folder.
4.1 Clone empty identity-internal repo
cd ~/Projects
git clone https://gitlab.reids.net.au/muppit-apps/identity-internal.git
cd identity-internal
mkdir -p k8s/prod
# sanity checks
git status
git remote -v
# first commit (so the repo is no longer empty)
touch .gitkeep
git add .gitkeep k8s/prod
git commit -m "chore: initialise identity-internal repo structure"
git push -u origin HEAD
4.2 Kustomization
# k8s/prod/kustomization.yaml
# Kustomization for ZITADEL #2 - Internal Identity Provider
#
# Purpose: Internal infrastructure authentication (staff only)
# Issuer: https://auth.reids.net.au
# Identity sources: NAS LDAP (primary) + ZITADEL-local (emergency)
# Reachability: Internal network / VPN only
#
# This IdP has access to:
# - NAS LDAP directory
# - Kubernetes API server (OIDC)
# - Internal *.reids.net.au services
#
# This IdP is NOT exposed to public internet.
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: identity-internal
resources:
- 10-helm-repository.yaml
- 20-secrets-db.enc.yaml
- 21-secrets-zitadel.enc.yaml
- 30-db-statefulset.yaml
- 50-helm-release-zitadel.yaml
- 60-ingress.yaml
4.3 Helm repository
# k8s/prod/10-helm-repository.yaml
# ZITADEL official Helm chart repository
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: zitadel
  namespace: identity-internal
spec:
  interval: 24h0m0s
  url: https://charts.zitadel.com
4.4 Database secret
- Generate password:
INTERNAL_DB_PASSWORD=$(openssl rand -base64 32)
echo "Internal DB Password: $INTERNAL_DB_PASSWORD"
- Create database secret:
cat > k8s/prod/20-secrets-db.enc.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: zitadel-db-secret
  namespace: identity-internal
type: Opaque
stringData:
  POSTGRES_USER: zitadel
  POSTGRES_PASSWORD: $INTERNAL_DB_PASSWORD
  POSTGRES_DB: zitadel
EOF
- Encrypt secret with SOPS:
sops -e -i k8s/prod/20-secrets-db.enc.yaml
4.5 Master key secret
ZITADEL requires a masterkey of exactly 32 bytes. Do not feed the output of openssl rand -base64 32 straight to ZITADEL: base64-encoding 32 random bytes produces a 44-character string, which is too long.
- Generate masterkey (exactly 32 bytes for ZITADEL):
INTERNAL_MASTER_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '=' | head -c 32)
echo "Internal Master Key: $INTERNAL_MASTER_KEY (32 bytes)"
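Because ZITADEL rejects a masterkey that is not exactly 32 bytes, it is worth verifying the length before encrypting. A quick check, counting bytes rather than trusting the echo above:

```shell
# generate the masterkey, then count its bytes - this must print 32
INTERNAL_MASTER_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '=' | head -c 32)
printf '%s' "$INTERNAL_MASTER_KEY" | wc -c
```

printf '%s' is used instead of echo so no trailing newline is counted.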
- ZITADEL masterkey secret:
cat > k8s/prod/21-secrets-zitadel.enc.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: zitadel-secret
  namespace: identity-internal
type: Opaque
stringData:
  masterkey: $INTERNAL_MASTER_KEY
EOF
- Encrypt secret with SOPS (20-secrets-db.enc.yaml was already encrypted in 4.4; re-running sops -e on it would nest the encryption):
sops -e -i k8s/prod/21-secrets-zitadel.enc.yaml
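SOPS rewrites the file in place, so a file that was never encrypted looks just like one that was, apart from the sops: metadata block appended on success. A hypothetical pre-commit guard built on that marker:

```shell
# hypothetical guard: `sops -e -i` appends a top-level "sops:" metadata
# block to YAML files, so its absence means the file is still plaintext
check_encrypted() {
  for f in "$@"; do
    if grep -q '^sops:' "$f" 2>/dev/null; then
      echo "$f: encrypted"
    else
      echo "$f: PLAINTEXT - do not commit"
    fi
  done
}
```

Run check_encrypted k8s/prod/*.enc.yaml before git add to catch a secret that was written but never encrypted.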
4.6 ZITADEL database
# k8s/prod/30-db-statefulset.yaml
# PostgreSQL database for ZITADEL Internal IdP
# Separate database instance from public IdP for security isolation
---
apiVersion: v1
kind: Service
metadata:
  name: zitadel-db
  namespace: identity-internal
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: postgres
  selector:
    app: zitadel-db
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zitadel-db
  namespace: identity-internal
spec:
  serviceName: zitadel-db
  replicas: 1
  selector:
    matchLabels:
      app: zitadel-db
  template:
    metadata:
      labels:
        app: zitadel-db
    spec:
      containers:
        - name: postgres
          image: postgres:16.4-alpine # Pin to specific version for production stability
          ports:
            - containerPort: 5432
              name: postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: zitadel-db-secret
                  key: POSTGRES_PASSWORD
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 2Gi
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - pg_isready -U postgres
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - pg_isready -U postgres
            initialDelaySeconds: 5
            periodSeconds: 5
  volumeClaimTemplates:
    - metadata:
        name: postgres-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
4.7 ZITADEL HelmRelease
- The chart and image are pinned.
- The configmapConfig contains FirstInstance and no secret references.
- Database passwords are configured via env only, and ZITADEL picks up wildcard trust via SSL_CERT_FILE (from the separate wildcard TLS trust runbook).
The INITIAL_LOGIN_PASSWORD you set here is used only once, for the bootstrap: you change it on first successful login, after which the INITIAL_LOGIN_PASSWORD becomes redundant and unusable.
# k8s/prod/50-helm-release-zitadel.yaml
# ZITADEL #2 - Internal Identity Provider
# Issuer: https://auth.reids.net.au
# Purpose: Internal staff and infrastructure authentication
# Identity: NAS LDAP (primary) + ZITADEL-local (emergency)
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: zitadel
  namespace: identity-internal
spec:
  interval: 30m
  chart:
    spec:
      chart: zitadel
      version: "8.13.4" # ZITADEL v2.67.2
      sourceRef:
        kind: HelmRepository
        name: zitadel
        namespace: identity-internal
      interval: 12h
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  values:
    image:
      repository: ghcr.io/zitadel/zitadel
      tag: v2.67.2
      pullPolicy: IfNotPresent
    replicaCount: 2
    zitadel:
      masterkeySecretName: zitadel-secret
      # Database connection via external secret
      dbSslMode: disable
      # This config is turned into the steps/config map that the init + setup jobs read.
      configmapConfig:
        ExternalDomain: auth.reids.net.au
        ExternalPort: 443
        ExternalSecure: true
        TLS:
          Enabled: false
        Database:
          Postgres:
            Host: zitadel-db
            Port: 5432
            Database: zitadel
            User:
              Username: zitadel
              SSL:
                Mode: disable
            Admin:
              Username: postgres
              SSL:
                Mode: disable
        Log:
          Level: info
          Formatter:
            Format: json
        # IMPORTANT: Seed the very first instance + admin user here.
        FirstInstance:
          InstanceName: "reids-internal"
          DefaultLanguage: en
          Org:
            Name: "Reid's Internal"
            Human:
              UserName: "admin"
              FirstName: "Admin"
              LastName: "User"
              Email:
                Address: "INITIAL_LOGIN_EMAIL_ADDRESS"
              Password: "INITIAL_LOGIN_PASSWORD"
        Machine:
          Identification:
            Hostname:
              Enabled: true
            Webhook:
              Enabled: false
        # OIDC configuration for Kubernetes integration
        OIDC:
          AdditionalClaims:
            - groups
    # Environment variables for database passwords
    env:
      - name: ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:
            name: zitadel-db-secret
            key: POSTGRES_PASSWORD
      - name: ZITADEL_DATABASE_POSTGRES_USER_PASSWORD
        valueFrom:
          secretKeyRef:
            name: zitadel-db-secret
            key: POSTGRES_PASSWORD
      - name: SSL_CERT_FILE
        value: /etc/ssl/certs/wildcard-reids.crt
    # ClusterIP service (nginx Ingress terminates TLS)
    service:
      type: ClusterIP
      port: 8080
      protocol: http2
    ingress:
      enabled: false # separate Ingress manifest
    resources:
      requests:
        cpu: 200m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 2Gi
    livenessProbe:
      httpGet:
        path: /debug/healthz
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /debug/ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: false
      capabilities:
        drop:
          - ALL
    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 1000
      fsGroup: 1000
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - zitadel
              topologyKey: kubernetes.io/hostname
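The INITIAL_LOGIN_EMAIL_ADDRESS and INITIAL_LOGIN_PASSWORD placeholders above must be replaced with real values by hand before committing. A small guard (hypothetical, not part of the manifests) can catch a forgotten placeholder:

```shell
# guard: refuse to proceed while bootstrap placeholders remain
# anywhere under k8s/prod/
if grep -rn 'INITIAL_LOGIN_' k8s/prod/ 2>/dev/null; then
  echo "bootstrap placeholders still present - replace them first" >&2
else
  echo "no bootstrap placeholders found"
fi
```

Running this from the repo root prints any offending file and line; silence from grep means the placeholders are gone.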
Do not configure LDAP or SMTP credentials via Kubernetes Secrets or environment variables.
LDAP and SMTP are both configured directly in the ZITADEL console after install.
4.8 Ingress
# k8s/prod/60-ingress.yaml
# Ingress for ZITADEL Internal (auth.reids.net.au)
# Internal network only - NOT exposed to public internet
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zitadel-ingress
  namespace: identity-internal
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - auth.reids.net.au
      # Note: No secretName specified - relies on default/wildcard certificate
      # for *.reids.net.au configured on the nginx ingress controller
  rules:
    - host: "auth.reids.net.au"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: zitadel
                port:
                  number: 8080
4.9 Commit identity-internal changes
git add .
git commit -m "feat: Bootstrap internal ZITADEL via FirstInstance"
git push
5. Stage 1 – Wire identity-internal into Flux
5.1 Create Flux directory
cd ~/Projects/flux-config
git pull
mkdir -p clusters/my-cluster/identity-internal
5.2 Kustomization
# clusters/my-cluster/identity-internal/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- source.yaml
- 10-kustomization-app.yaml
5.3 Namespace
# clusters/my-cluster/identity-internal/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: identity-internal
  labels:
    name: identity-internal
    trust: enabled
    role: app
5.4 Flux Kustomization
# clusters/my-cluster/identity-internal/10-kustomization-app.yaml
# Flux Kustomization for ZITADEL #2 (Internal IdP)
# Issuer: https://auth.reids.net.au
# Purpose: Internal infrastructure authentication (staff only)
# Identity: NAS LDAP (primary) + ZITADEL-local (emergency)
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 1m0s
  path: ./k8s/prod
  prune: true
  sourceRef:
    kind: GitRepository
    name: identity-internal
    namespace: flux-system
  targetNamespace: identity-internal
  wait: true
  timeout: 5m0s
  decryption:
    provider: sops
    secretRef:
      name: sops-age
5.5 Source
# clusters/my-cluster/identity-internal/source.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 1m0s
  timeout: 60s
  url: ssh://git-ssh.reids.net.au/muppit-apps/identity-internal.git
  ref:
    branch: main
  secretRef:
    name: flux-ssh-auth
  ignore: |
    # Ignore everything except k8s directory
    /*
    !/k8s/
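The ignore field uses .gitignore-style patterns, so everything except the k8s/ tree is excluded from what Flux fetches. One way to sanity-check the pattern locally, assuming git is installed, is to replay it through git check-ignore in a throwaway repo:

```shell
# simulate Flux's ignore rules using git's identical pattern syntax:
# "/*" excludes every top-level entry, "!/k8s/" re-includes the k8s tree
tmp=$(mktemp -d) && cd "$tmp" && git init -q
printf '/*\n!/k8s/\n' > .gitignore
mkdir -p k8s/prod
touch README.md k8s/prod/app.yaml
git check-ignore -q README.md         && echo "README.md excluded"
git check-ignore -q k8s/prod/app.yaml || echo "k8s/prod/app.yaml kept"
```

Anything reported as excluded here is exactly what Flux will skip when it clones the repository.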
5.6 Add to cluster-level Kustomization
Add identity-internal to clusters/my-cluster/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cert-manager.yaml
- ./flux-system
- ./apps
- ./dev
- ./origin-ca-issuer
- ./cloudflare
- ./blaster
- ./identity-internal
- ./infra-trust
- ./cert-manager
5.7 Commit changes to flux-config repo
git add .
git commit -m "feat: Add identity-internal (ZITADEL #2)"
git push
5.8 Trigger reconciliation
flux reconcile source git flux-system
flux reconcile kustomization identity-internal --with-source
5.9 Monitor deployment
kubectl get all -n identity-internal -o wide
kubectl get pods -n identity-internal -w
Expected once steady:
kubectl get pods -n identity-internal -w
NAME READY STATUS RESTARTS AGE
zitadel-78f7bd7d56-lwvlr 1/1 Running 0 3h52m
zitadel-78f7bd7d56-pxkp9 1/1 Running 0 3h52m
zitadel-db-0 1/1 Running 0 35h
zitadel-init-lkp89 0/1 Completed 0 3h52m
zitadel-setup-sbfvt 0/1 Completed 0 3h52m