IdP dual architecture

info

Architecture overview for the dual-IdP setup.

Identity provider series

  1. IdP dual overview
  2. IdP dual architecture - you are here
  3. IdP internal deployment
  4. IdP internal console
  5. IdP internal SMTP
  6. IdP internal LDAP
  7. IdP internal OIDC
  8. IdP internal OAUTH2 proxy
  9. IdP internal backup and restore

Identity model

Two separate ZITADEL instances, each with a clear responsibility boundary.

| IdP | Hostname | Audience | Identity sources | Reachability |
| --- | --- | --- | --- | --- |
| Public IdP | auth.muppit.au | Internet users and public apps | ZITADEL-local accounts, social login (optional/future) | Internet via Cloudflare |
| Internal IdP | auth.reids.net.au | Internal staff and infrastructure | NAS LDAP plus ZITADEL-local break-glass accounts | Internal network (and/or VPN) |

Key rules:

  • The public IdP never talks to NAS LDAP or internal cluster services.
  • The internal IdP is the only one allowed to:
    • use NAS LDAP as an identity provider
    • integrate with the Kubernetes API server and Kubernetes Dashboard
    • authenticate internal-only apps

The result is two separate identity domains that happen to share a Kubernetes cluster and GitOps tooling.

GitOps layout

Identity follows the same GitOps pattern as the rest of the cluster:

  • flux-config owns:
    • namespaces (identity-public, identity-internal)
    • Flux GitRepository and Kustomization objects, pointing at the IdP repos
  • identity-public and identity-internal repos own:
    • ZITADEL HelmRelease
    • PostgreSQL resources
    • SOPS-encrypted secrets (*.enc.yaml)
    • IdP-specific ingress, tunnels, and any supporting manifests

Conceptually:

  • flux-config answers: “what runs where, and where does it source from?”
  • identity-* answers: “how is ZITADEL (and its data) deployed and configured?”
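
As a sketch of the flux-config side, one Flux source and Kustomization pair per IdP might look like the following (repository URL, path, and secret names are placeholders):

```yaml
# flux-config (sketch): point Flux at the internal IdP repo
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/identity-internal   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: identity-internal
  path: ./                 # wherever the manifests live in that repo
  prune: true
  targetNamespace: identity-internal
  decryption:
    provider: sops
    secretRef:
      name: sops-age       # assumed name of the Secret holding the age key
```

The public IdP follows the same pattern with an identity-public GitRepository/Kustomization pair targeting the identity-public namespace.
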
note

Console settings (SMTP/LDAP providers, projects, roles, Actions) are stored in the IdP database. Treat PostgreSQL backups as mandatory, not optional.

Data and secrets

Both IdPs share the same baseline pattern:

  • One PostgreSQL database per IdP
    • runs as a StatefulSet with its own PVC (or is provided by a managed service)
    • holds all ZITADEL state for that instance (users, orgs, projects, SMTP, Actions, LDAP config, etc.)
  • One ZITADEL masterkey per IdP
    • stored in a SOPS-encrypted Kubernetes Secret
    • also stored once in a password manager
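
In decrypted form, the per-IdP masterkey Secret might look like the sketch below; the Secret name and key are assumptions, and only the SOPS-encrypted *.enc.yaml version ever lands in Git:

```yaml
# Decrypted view only – commit the SOPS-encrypted *.enc.yaml, never this plaintext
apiVersion: v1
kind: Secret
metadata:
  name: zitadel-masterkey          # assumed name referenced by the ZITADEL HelmRelease
  namespace: identity-internal
type: Opaque
stringData:
  masterkey: "<32-character-random-string>"   # placeholder; the real value also lives in the password manager
```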

Secrets split:

  • In Git (via SOPS):

    • database credentials for each IdP
    • ZITADEL masterkey for each IdP
    • internal LDAP bind credentials (internal only)
    • Cloudflare-related credentials used by that IdP deployment (for example tunnel credentials, DNS-01 API tokens)
  • In the ZITADEL console (persisted in PostgreSQL):

    • SMTP provider configuration
    • LDAP identity provider configuration (internal only)
    • projects, roles, authorisations (grants), Actions and flows
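
To keep the in-Git half encrypted consistently, a .sops.yaml at each IdP repo root can restrict encryption to the secret payloads; the age recipient below is a placeholder:

```yaml
# .sops.yaml (sketch): encrypt only data/stringData in *.enc.yaml files
creation_rules:
  - path_regex: .*\.enc\.yaml$
    encrypted_regex: ^(data|stringData)$
    age: age1exampleexampleexampleexampleexampleexampleexampleexample   # placeholder recipient
```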

Trust and networking

Two different trust stories, one cluster.

Internal IdP (auth.reids.net.au)

  • Exposed internally via NGINX Ingress.
  • Uses the existing *.reids.net.au wildcard TLS at the ingress layer.

For outbound TLS, including LDAPS to the NAS:

  • cluster-wide trust is handled by trust-manager plus Gatekeeper policy
  • trusted namespaces receive the required CA material
  • ZITADEL (and other pods) consume it via SSL_CERT_FILE (for example SSL_CERT_FILE=/etc/ssl/certs/wildcard-reids.crt)
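
A minimal trust-manager Bundle along these lines could distribute that CA material; the source Secret name and the namespace label are assumptions:

```yaml
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: reids-ca-bundle              # assumed name
spec:
  sources:
    - useDefaultCAs: true            # keep public CAs alongside the internal one
    - secret:
        name: wildcard-reids-ca      # assumed Secret holding the internal CA certificate
        key: ca.crt
  target:
    configMap:
      key: ca-bundle.crt
    namespaceSelector:
      matchLabels:
        trust: enabled               # assumed label on trusted namespaces
```

The resulting ConfigMap is then mounted into consuming pods and pointed to by SSL_CERT_FILE, matching the pattern above.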

Public IdP (auth.muppit.au)

  • Reached from the internet via Cloudflare (for example Cloudflare Tunnel, formerly Argo Tunnel), not directly via your internal network.
  • Public TLS is handled via Cloudflare plus in-cluster certificate management (for example cert-manager with DNS-01), depending on where TLS terminates for that service.
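
If cert-manager issues the public certificate, a Cloudflare DNS-01 ClusterIssuer along these lines is one option (issuer name, contact email, and secret names are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01                  # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com               # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # the SOPS-managed DNS-01 API token mentioned earlier
              key: api-token
```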

The public IdP does not need outbound trust to the NAS or internal-only services because it is intentionally isolated from them.

How apps and infrastructure use the IdPs

The IdPs form the backbone for both human and application access.

Internal IdP usage

  • Kubernetes API server

    • Uses internal ZITADEL (auth.reids.net.au) as the OIDC issuer.
    • Kubernetes authorisation is driven by ZITADEL project roles (grants), flattened into a token groups claim via a ZITADEL Action (for example ["cluster-admin"], ["cluster-read-only"]).
    • Kubernetes maps groups to RBAC via --oidc-groups-claim=groups and --oidc-groups-prefix=oidc: (for example oidc:cluster-admin); a sketch of the flags and a matching RBAC binding follows this list.
  • Kubernetes Dashboard

    • Protected by an OAuth2 Proxy in front of the Dashboard.
    • Users authenticate via auth.reids.net.au rather than pasting service account tokens.
    • Access is controlled by the same OIDC groups claim and Kubernetes RBAC bindings.
  • Internal apps (for example dev instances)

    • Use auth.reids.net.au as their OIDC issuer.
    • Authentication is via NAS-backed LDAP users.
    • Application authorisation should be based on ZITADEL roles/grants (optionally using Actions to emit app-friendly claims), rather than relying on NAS LDAP group membership directly.
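
A sketch of the API-server side and one matching RBAC binding, assuming a client ID of kubernetes registered in ZITADEL and email as the username claim:

```yaml
# kube-apiserver static pod (excerpt): OIDC flags pointing at the internal IdP
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --oidc-issuer-url=https://auth.reids.net.au
        - --oidc-client-id=kubernetes        # assumed client ID registered in ZITADEL
        - --oidc-username-claim=email        # assumed username claim
        - --oidc-groups-claim=groups
        - --oidc-groups-prefix=oidc:
        # ...existing flags unchanged
---
# RBAC binding for the flattened groups claim
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admin
subjects:
  - kind: Group
    name: oidc:cluster-admin               # groups claim value plus the oidc: prefix
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```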

Public IdP usage

  • Blaster production

    • Uses public ZITADEL (auth.muppit.au) as the OIDC issuer for end-user login (NextAuth in the app).
  • Fit production

    • Uses (or will use) auth.muppit.au with the same NextAuth pattern.
  • Future public apps

    • Re-use the same pattern and public IdP (auth.muppit.au).
    • Keep public identity isolated from internal infrastructure identity (no access to NAS, Kubernetes, or reids.net.au-only services).

Versioning and upgrades

Both IdPs should be pinned to explicit versions:

  • ZITADEL Helm chart version
  • ZITADEL container image tag
  • PostgreSQL image tag (or managed DB engine/version)
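
In the Flux HelmRelease, that pinning might look like the following; the chart source name and value keys depend on the chart, and the versions shown are placeholders:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: zitadel
  namespace: identity-internal
spec:
  interval: 10m
  chart:
    spec:
      chart: zitadel
      version: "x.y.z"            # pin the exact chart version
      sourceRef:
        kind: HelmRepository
        name: zitadel             # assumed HelmRepository name
        namespace: flux-system
  values:
    image:
      tag: "vX.Y.Z"               # pin the exact image tag (key depends on the chart's values schema)
```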

Upgrade flow:

  1. Change versions in the relevant IdP repo.
  2. Commit and push.
  3. Let Flux apply, or trigger flux reconcile.
  4. Validate login, console health, and any OIDC clients that depend on the issuer.