
Identity providers (IdP) overview

This page introduces the dual-IdP ZITADEL series. It explains why the dual-IdP pattern exists, what it looks like in Kubernetes and GitOps, and where to find the detailed runbooks.

Identity provider series

  1. IdP dual overview - you are here
  2. IdP dual architecture
  3. IdP internal deployment
  4. IdP internal console
  5. IdP internal SMTP
  6. IdP internal LDAP
  7. IdP internal OIDC
  8. IdP internal OAUTH2 proxy
  9. IdP internal backup and restore

Why a dual IdP

The core decision is to operate two completely separate ZITADEL instances instead of one shared IdP.

The reasons:

  • Blast radius and risk separation

    • Public accounts and internet traffic live in the public IdP.
    • Internal staff, LDAP, Kubernetes and admin tools live in the internal IdP.
    • A misconfiguration or compromise in the public IdP does not expose NAS, LDAP or cluster-internal systems.
  • Different trust boundaries

    • Internal IdP can talk to NAS LDAP, Kubernetes API and other internal services.
    • Public IdP never connects directly to NAS or internal services and only sees traffic that comes in via Cloudflare.
  • Simpler policies per audience

    • Internal IdP can enforce stricter MFA, password and session policies for staff and infrastructure.
    • Public IdP can focus on user experience, social logins and public apps without dragging internal rules into the mix.
  • Cleaner app configuration

    • Internal apps (Blaster dev, Fit dev, Kubernetes Dashboard) always talk to auth.reids.net.au.
    • Public apps (Blaster prod, Fit prod and future apps) always talk to auth.muppit.au.
    • Each app has a single, clear IdP depending on whether it is internal or public (see the sketch after this list).
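To make the "cleaner app configuration" point concrete, the sketch below shows one way the per-app settings could look in Kubernetes. The ConfigMap names, namespaces and the OIDC_ISSUER key are illustrative assumptions; only the two IdP hostnames come from this page.

```yaml
# Hypothetical per-app OIDC settings; resource names, namespaces and keys are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: blaster-dev-oidc        # internal app -> talks to the internal IdP
  namespace: blaster-dev
data:
  OIDC_ISSUER: https://auth.reids.net.au
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: blaster-prod-oidc       # public app -> talks to the public IdP
  namespace: blaster-prod
data:
  OIDC_ISSUER: https://auth.muppit.au
```

Whether the issuer ends up in a ConfigMap, a Secret or a Helm value, the point stays the same: each app carries exactly one issuer URL, chosen by whether it is internal or public.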

The rest of the identity series documents how to implement this pattern and keep it recoverable.

Two instances

The plan is to run two ZITADEL instances:

  • auth.reids.net.au in the identity-internal namespace (internal IdP for staff and infrastructure).
  • auth.muppit.au in the identity-public namespace (public IdP for internet-facing apps).
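In cluster terms the split is simply two namespaces, each holding its own ZITADEL release and database. A minimal sketch (the comments only restate the mapping above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: identity-internal   # hosts auth.reids.net.au (internal IdP)
---
apiVersion: v1
kind: Namespace
metadata:
  name: identity-public     # hosts auth.muppit.au (public IdP)
```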

Both instances are managed by FluxCD on Kubernetes and share the same core patterns:

  • One PostgreSQL database per instance in its own namespace.
  • SOPS-encrypted secrets in Git for database credentials and masterkey.
  • Wildcard TLS certificates with cluster-wide trust (trust-manager plus Gatekeeper and SSL_CERT_FILE).
  • Apps (Blaster, Fit and others) using OIDC via NextAuth.
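To show how the FluxCD and SOPS pieces fit together, here is a minimal sketch of a Flux Kustomization for the internal instance that decrypts SOPS-encrypted Secrets (database credentials and masterkey) before applying the manifests. The Git source name, repo path and decryption secret name are assumptions for illustration; the real manifests are covered in the deployment runbook later in the series.

```yaml
# Hypothetical Flux Kustomization for the internal IdP; names and paths are assumptions.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: identity-internal
  namespace: flux-system
spec:
  interval: 10m
  targetNamespace: identity-internal
  sourceRef:
    kind: GitRepository
    name: flux-system              # assumed Git source managed by Flux
  path: ./apps/identity-internal   # assumed repo path holding the ZITADEL manifests
  prune: true
  decryption:
    provider: sops                 # decrypt SOPS-encrypted Secrets before applying
    secretRef:
      name: sops-age               # assumed Secret containing the age private key
```

The public instance would get an equivalent Kustomization pointing at the identity-public namespace and its own path, keeping the two IdPs fully independent in Git as well as at runtime.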