IdP dual architecture
Architecture overview for the dual-IdP setup.
Identity provider series
- IdP dual overview
- IdP dual architecture (you are here)
- IdP internal deployment
- IdP internal console
- IdP internal SMTP
- IdP internal LDAP
- IdP internal OIDC
- IdP internal OAUTH2 proxy
- IdP internal backup and restore
Identity model
Two separate ZITADEL instances, each with a clear responsibility boundary.
| IdP | Hostname | Audience | Identity sources | Reachability |
|---|---|---|---|---|
| Public IdP | auth.muppit.au | Internet users and public apps | ZITADEL-local accounts, social login (optional/future) | Internet via Cloudflare |
| Internal IdP | auth.reids.net.au | Internal staff and infrastructure | NAS LDAP plus ZITADEL-local break-glass accounts | Internal network (and/or VPN) |
Key rules:
- The public IdP never talks to NAS LDAP or internal cluster services.
- The internal IdP is the only one allowed to:
  - use NAS LDAP as an identity provider
  - integrate with the Kubernetes API server and Kubernetes Dashboard
  - authenticate internal-only apps
These are two separate identity domains that happen to share a Kubernetes cluster and GitOps tooling.
GitOps layout
Identity follows the same GitOps pattern as the rest of the cluster:
- `flux-config` owns:
  - namespaces (`identity-public`, `identity-internal`)
  - Flux `GitRepository` and `Kustomization` objects, pointing at the IdP repos
- `identity-public` and `identity-internal` repos own:
  - ZITADEL `HelmRelease`
  - PostgreSQL resources
  - SOPS-encrypted secrets (`*.enc.yaml`)
  - IdP-specific ingress, tunnels, and any supporting manifests

Conceptually:

- `flux-config` answers: “what runs where, and where does it source from?”
- `identity-*` answers: “how is ZITADEL (and its data) deployed and configured?”
Console settings (SMTP/LDAP providers, projects, roles, Actions) are stored in the IdP database. Treat PostgreSQL backups as mandatory, not optional.
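To make the `flux-config` side concrete, here is a minimal sketch of the Flux objects it could own for the internal IdP; the repository URL, branch, intervals, and path are illustrative placeholders, not the real values.

```yaml
# Sketch only: Flux source + reconciliation objects that flux-config could own
# for the internal IdP. URL, branch, intervals, and path are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/identity-internal   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 10m
  targetNamespace: identity-internal
  sourceRef:
    kind: GitRepository
    name: identity-internal
  path: ./deploy        # placeholder path inside the identity-internal repo
  prune: true
```

The `identity-public` pair is identical in shape, pointing at its own repo and namespace.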
Data and secrets
Both IdPs share the same baseline pattern:
- One PostgreSQL database per IdP
  - runs as a `StatefulSet` with its own PVC (or is provided by a managed service)
  - holds all ZITADEL state for that instance (users, orgs, projects, SMTP, Actions, LDAP config, etc.)
- One ZITADEL masterkey per IdP
  - stored in a SOPS-encrypted Kubernetes `Secret`
  - also stored once in a password manager
Secrets split:
- In Git (via SOPS), as sketched after this list:
  - database credentials for each IdP
  - ZITADEL masterkey for each IdP
  - internal LDAP bind credentials (internal only)
  - Cloudflare-related credentials used by that IdP deployment (for example tunnel credentials, DNS-01 API tokens)
- In the ZITADEL console (persisted in PostgreSQL):
  - SMTP provider configuration
  - LDAP identity provider configuration (internal only)
  - projects, roles, authorisations (grants), Actions and flows
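As a hedged illustration of the Git side of that split, the masterkey might be authored as a plain `Secret` and encrypted with SOPS into a `*.enc.yaml` file before commit; the names and namespace below are assumptions, not the actual configuration.

```yaml
# Sketch: Secret as authored locally, then encrypted with sops into
# zitadel-masterkey.enc.yaml before it is committed. Names are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: zitadel-masterkey
  namespace: identity-internal
stringData:
  masterkey: "<masterkey value, also kept once in the password manager>"
```

On the Flux side, the `Kustomization` for each IdP repo needs a `decryption` block with `provider: sops` (plus a reference to the decryption key) so the controller can apply the `*.enc.yaml` files at reconcile time.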
Trust and networking
Two different trust stories, one cluster.
Internal IdP (auth.reids.net.au)
- Ingressed internally via NGINX Ingress.
- Uses the existing `*.reids.net.au` wildcard TLS at the ingress layer.
For outbound TLS, including LDAPS to the NAS:
- cluster-wide trust is handled by `trust-manager` plus Gatekeeper policy
- trusted namespaces receive the required CA material
- ZITADEL (and other pods) consume it via `SSL_CERT_FILE` (for example `SSL_CERT_FILE=/etc/ssl/certs/wildcard-reids.crt`)
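A rough sketch of that trust distribution, assuming a trust-manager `Bundle` that targets labelled namespaces; the bundle name, source Secret, label, and target key are assumptions chosen to line up with the `SSL_CERT_FILE` path above.

```yaml
# Sketch: trust-manager Bundle distributing CA material (including what LDAPS
# to the NAS needs) to trusted namespaces. Names and labels are assumptions.
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: internal-ca
spec:
  sources:
    - useDefaultCAs: true                # keep the public roots
    - secret:
        name: wildcard-reids-ca          # placeholder source of the internal CA
        key: ca.crt
  target:
    configMap:
      key: wildcard-reids.crt            # matches the SSL_CERT_FILE filename above
    namespaceSelector:
      matchLabels:
        trust: enabled                   # label carried by trusted namespaces
```

Pods in those namespaces mount the generated ConfigMap under `/etc/ssl/certs/` and point `SSL_CERT_FILE` at it.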
Public IdP (auth.muppit.au)
- Reached from the internet via Cloudflare (for example a Cloudflare Tunnel, formerly Argo Tunnel), not directly via your internal network.
- Public TLS is handled via Cloudflare plus in-cluster certificate management (for example cert-manager with DNS-01), depending on how you terminate TLS for that service.
The public IdP does not need outbound trust to the NAS or internal-only services because it is intentionally isolated from them.
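If the cert-manager/DNS-01 route is what terminates in-cluster, the issuer could look roughly like this; the issuer name, email, and Secret name are placeholders, and the Cloudflare API token is one of the SOPS-managed credentials listed earlier.

```yaml
# Sketch: cert-manager ClusterIssuer solving DNS-01 through Cloudflare for the
# muppit.au zone. Names and email are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    email: admin@example.com                     # placeholder contact
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token         # SOPS-managed Secret
              key: api-token
        selector:
          dnsZones:
            - muppit.au
```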
How apps and infrastructure use the IdPs
The IdPs form the backbone for both human and application access.
Internal IdP usage
- Kubernetes API server
  - Uses internal ZITADEL (`auth.reids.net.au`) as the OIDC issuer.
  - Kubernetes authorisation is driven by ZITADEL project roles (grants), flattened into a token `groups` claim via a ZITADEL Action (for example `["cluster-admin"]`, `["cluster-read-only"]`).
  - Kubernetes maps `groups` to RBAC via `--oidc-groups-claim=groups` and `--oidc-groups-prefix=oidc:` (for example `oidc:cluster-admin`); see the sketch after this list.
- Kubernetes Dashboard
  - Protected by an OAuth2 Proxy in front of the Dashboard (also sketched after this list).
  - Users authenticate via `auth.reids.net.au` rather than pasting service account tokens.
  - Access is controlled by the same OIDC `groups` claim and Kubernetes RBAC bindings.
- Internal apps (for example dev instances)
  - Use `auth.reids.net.au` as their OIDC issuer.
  - Authentication is via NAS-backed LDAP users.
  - Application authorisation should be based on ZITADEL roles/grants (optionally using Actions to emit app-friendly claims), rather than relying on NAS LDAP group membership directly.
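For the API-server integration above, a hedged sketch of the flags (kubeadm `ClusterConfiguration` form) and of an RBAC binding for one flattened group; the client ID and username claim are assumptions.

```yaml
# Sketch: kube-apiserver OIDC flags in kubeadm form, plus an RBAC binding for
# the prefixed group. Client ID and username claim are placeholders.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: https://auth.reids.net.au
    oidc-client-id: kubernetes            # placeholder ZITADEL OIDC client ID
    oidc-username-claim: email            # assumption
    oidc-groups-claim: groups
    oidc-groups-prefix: "oidc:"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admin
subjects:
  - kind: Group
    name: oidc:cluster-admin              # "cluster-admin" from the groups claim, prefixed
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```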
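And for the Dashboard, a minimal sketch of the OAuth2 Proxy sitting in front of it, assuming oauth2-proxy in OIDC mode; the client ID, upstream service address, allowed group, and Secret name are assumptions.

```yaml
# Sketch: oauth2-proxy in front of the Kubernetes Dashboard, authenticating
# against the internal IdP. Names, upstream, and group are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard-oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels: {app: dashboard-oauth2-proxy}
  template:
    metadata:
      labels: {app: dashboard-oauth2-proxy}
    spec:
      containers:
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.6.0   # pin explicitly
          args:
            - --provider=oidc
            - --oidc-issuer-url=https://auth.reids.net.au
            - --client-id=kubernetes-dashboard              # placeholder client ID
            - --email-domain=*
            - --allowed-group=cluster-admin                 # gate on the groups claim
            - --upstream=http://kubernetes-dashboard.kubernetes-dashboard.svc
            - --http-address=0.0.0.0:4180
          envFrom:
            - secretRef:
                name: dashboard-oauth2-proxy   # client + cookie secrets (SOPS-managed)
```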
Public IdP usage
- Blaster production
  - Uses public ZITADEL (`auth.muppit.au`) as the OIDC issuer for end-user login (NextAuth in the app).
- Fit production
  - Uses (or will use) `auth.muppit.au` with the same NextAuth pattern.
- Future public apps
  - Re-use the same pattern and public IdP (`auth.muppit.au`).
  - Keep public identity isolated from internal infrastructure identity (no access to NAS, Kubernetes, or `reids.net.au`-only services).
Versioning and upgrades
Both IdPs should be pinned to explicit versions:
- ZITADEL Helm chart version
- ZITADEL container image tag
- PostgreSQL image tag (or managed DB engine/version)
Upgrade flow:
- Change versions in the relevant IdP repo.
- Commit and push.
- Let Flux apply, or trigger `flux reconcile`.
- Validate login, console health, and any OIDC clients that depend on the issuer.
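As a sketch of where those pins live, assuming the Flux `HelmRelease` for one IdP; the version numbers below are placeholders and the values keys should be checked against the upstream zitadel chart.

```yaml
# Sketch: explicit version pins in the ZITADEL HelmRelease for one IdP.
# Version numbers are placeholders; confirm the values keys against the chart.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: zitadel
  namespace: identity-internal
spec:
  interval: 10m
  chart:
    spec:
      chart: zitadel
      version: "8.0.0"              # pinned chart version (placeholder)
      sourceRef:
        kind: HelmRepository
        name: zitadel
        namespace: flux-system
  values:
    image:
      tag: "v2.65.0"                # pinned image tag (placeholder; assumed values key)
```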