
IdP internal OIDC

info

Use this runbook to connect Kubernetes to the internal ZITADEL instance as an OIDC identity provider.

It covers both the ZITADEL console configuration that is specific to Kubernetes and the cluster-side configuration (API server flags and RBAC).

General internal IdP deployment (namespace, HelmRelease, PostgreSQL, SMTP, base tenants) is documented separately in the Internal Identity Deployment and Console guides.

Identity provider series

  1. IdP dual overview
  2. IdP dual architecture
  3. IdP internal deployment
  4. IdP internal console
  5. IdP internal SMTP
  6. IdP internal LDAP
  7. IdP internal OIDC - you are here
  8. IdP internal OAUTH2 proxy
  9. IdP internal backup and restore

0. Design decisions

important

ZITADEL has an open issue where only the first memberOf value is mapped when a user has multiple memberOf entries in LDAP.

0.1 ZITADEL limitation

  • If a user has memberOf: cn=k8s-admins,... and other groups, ZITADEL may only see the first one, depending on order.
  • That makes “LDAP group → groups claim” potentially risky if the user has more than one group assigned in LDAP.

LDAP attributes can be exposed, but multi-group fidelity is not guaranteed.

0.2 Use ZITADEL roles, not NAS groups

  • LDAP is “how you log in”.
  • Authorisation is modelled by ZITADEL roles, not NAS groups.
  • LDAP memberOf is not used for authorisation.

1. Goal and scope

Goal: Configure Kubernetes components so that:

  • Users authenticate against the internal ZITADEL instance via OIDC.
  • ZITADEL roles (grants) are flattened into a groups claim in the token.
  • The Kubernetes API server reads that groups claim via OIDC.
  • Kubernetes RBAC grants permissions based on those groups.

This guide covers:

  • Console-side steps that are specific to Kubernetes:
    • Project and role for cluster administration.
    • kubernetes-api OIDC client for the API server and kubelogin.
  • Cluster-side steps:
    • API server OIDC flags.
    • ClusterRoleBinding for OIDC admins.

This is the same pattern used by tools such as Argo CD and Grafana: a ZITADEL Action converts project roles into a groups claim that downstream systems can consume as-is.

High-level flow:

  1. Keep kubeconfig clean, with a single, known-good admin context.
  2. In ZITADEL, grant roles (cluster-admin) to specific users.
  3. Create a ZITADEL Action to flatten project roles into a token groups claim.
  4. Configure the API server to use that groups claim for RBAC.
  5. Create RBAC bindings against those groups (for example oidc:cluster-admin).
  6. Add a clean kubelogin user and context to kubeconfig to test the full flow.
  7. Decode a real token to verify the claims.

1.1 Read-only cluster role

In addition to cluster-admin, this guide also provides a cluster-read-only role: it can list, get and watch resources, but cannot create, update or delete them, and cannot exec into pods or otherwise change running workloads. The full pattern (ZITADEL role, groupsClaim Action, oidc: prefix and a ClusterRoleBinding to the built-in view ClusterRole) is implemented in section 10.


2. Prerequisites

Before changing any configuration, you should already have:

  • Internal ZITADEL deployed and healthy:
    • https://auth.reids.net.au/ui/console
  • Admin account created and password changed.
  • SMTP configured so ZITADEL can send emails.
  • NAS LDAP reachable from the cluster via LDAPS (nas.reids.net.au:636).
  • TLS trust for *.reids.net.au configured in the cluster so that ZITADEL pods trust the NAS LDAPS endpoint via SSL_CERT_FILE.
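
A quick sanity check of those prerequisites, run from a workstation or control plane node (assuming curl and openssl are available):

# ZITADEL health endpoint should return HTTP 200
curl -fsS https://auth.reids.net.au/debug/healthz

# The NAS LDAPS endpoint should present a certificate the cluster trusts
openssl s_client -connect nas.reids.net.au:636 -brief </dev/null
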
info

Project roles are assigned to users via Authorizations (user grants). LDAP still matters because it is how users sign in (and how you keep identities central), but the mapping from 'this person is an admin' to cluster-admin is handled via ZITADEL roles and grants.

From the console side you will create in this guide:

  • Project: Kubernetes Infrastructure.
  • Role: cluster-admin in that project.
  • NAS LDAP identity provider: nas.
  • Authorizations that give selected users the cluster-admin role.
  • Application: kubernetes-api (web OIDC) used by kubelogin.

Keep any client IDs and secrets in your password manager.


3. Console configuration for Kubernetes

3.1 Kubernetes infrastructure project

Create Kubernetes Infrastructure project.

In the ZITADEL console at https://auth.reids.net.au:

  1. Go to Projects → Create new project.
  2. Set:
    • Name: Kubernetes Infrastructure
  3. Save.

Keep this project dedicated to Kubernetes-related roles and applications.


3.2 Kubernetes infrastructure roles

Create cluster-admin role.

Still in the Kubernetes Infrastructure project:

  1. Go to Projects → Kubernetes Infrastructure → Roles.
  2. Click New.
  3. Set:
    • Key: cluster-admin
    • Display Name: Cluster administrator
    • Leave Group empty (optional, for UI grouping only).
  4. Save.

You now have a role that represents full cluster administration rights and can be used across multiple Kubernetes-related apps.


3.3 Kubernetes infrastructure authorizations

Grant the cluster-admin role to the users who should administer the cluster.

For each admin user:

  1. Make sure they can sign in via LDAP (for example, a NAS user NAS_USERNAME).
  2. In the console, go to Projects → Kubernetes Infrastructure → Authorizations.
  3. Click New.
  4. Select the User from the dropdown list (for example AUTHORISED_USER_EMAIL).
  5. Select Roles: cluster-admin.
  6. Save.

If a user without any role on this project attempts to log in to a Kubernetes application, they will see:

Login not possible. The user is required to have at least one grant on the application.

Assigning at least one role (for example cluster-admin) removes this error.


3.4 Kubernetes infrastructure kubernetes-api application

Create kubernetes-api application.

This is the OIDC client used by kubelogin / kubectl and referenced by the API server.

  1. In the Kubernetes Infrastructure project go to Applications → New.
  2. Select Web as the type.
  3. Name the application:
    • Name: kubernetes-api
  4. Click Continue.

On the authentication method wizard:

  1. Choose PKCE (authorisation code flow with random hash).
  2. Continue to the redirect step.

On the redirect step:

  1. Enable Development Mode (allows http:// redirects).
  2. Add:
    • Redirect URIs: http://localhost:8000
    • Post Logout URIs: http://localhost:8000
  3. Confirm and create the application.


3.5 Adjust kubernetes-api OIDC configuration

Open the kubernetes-api app and adjust the settings.

  1. Go to Projects → Kubernetes Infrastructure → Applications → kubernetes-api.
  2. Under OIDC Configuration:
    • Application Type: Web
    • Response Types: Code
    • Authentication Method: None
    • Grant Types: tick both:
      • Authorization Code
      • Refresh Token
  3. Tick Refresh Token.
  4. Click Save.

When Refresh Token is enabled and the client requests offline_access, ZITADEL issues:

  • a short-lived access / ID token (what the API server uses), and
  • a long-lived refresh token.

kubelogin can silently swap the refresh token for new access tokens when they expire, so you stay logged in.

If Refresh Token is not enabled:

  • You still log in fine.
  • When the token expires, kubelogin has to send you back through the browser flow again.

3.5.1 Redirect settings

  1. Open the Redirect Settings tab.
  2. Ensure:
    • Development Mode: enabled.
    • Redirect URIs: http://localhost:8000
    • Post Logout URIs: http://localhost:8000
  3. Save.

3.5.2 Token settings

  1. Open the Token Settings tab.
  2. Leave Auth Token Type: Bearer Token.
  3. Tick:
    • If selected, the requested roles of the authenticated user are added to the access token.
    • User roles inside ID Token.
    • Optionally User Info inside ID Token (useful for debugging).
  4. Save.

These options ensure that the ID and access tokens contain role information which Kubernetes can later map to RBAC principals.

3.5.3 Discovery URLs and client secret

  1. Open the URLs tab and note the endpoints. The important values are:
    • Issuer: https://auth.reids.net.au
    • JWKS URI: https://auth.reids.net.au/oauth/v2/keys
    • Authorization Endpoint: https://auth.reids.net.au/oauth/v2/authorize
    • Token Endpoint: https://auth.reids.net.au/oauth/v2/token
  2. From the Actions menu, generate a Client Secret for kubernetes-api and store it securely.

The full endpoint list from the URLs tab:

  • Authorization Endpoint: https://auth.reids.net.au/oauth/v2/authorize
  • Device Authorization Endpoint: https://auth.reids.net.au/oauth/v2/device_authorization
  • End Session Endpoint: https://auth.reids.net.au/oidc/v1/end_session
  • Introspection Endpoint: https://auth.reids.net.au/oauth/v2/introspect
  • JWKS URI: https://auth.reids.net.au/oauth/v2/keys
  • Revocation Endpoint: https://auth.reids.net.au/oauth/v2/revoke
  • Token Endpoint: https://auth.reids.net.au/oauth/v2/token
  • Userinfo Endpoint: https://auth.reids.net.au/oidc/v1/userinfo

You will use:

  • https://auth.reids.net.au as the issuer URL.
  • The kubernetes-api client ID (a generated value, not literally kubernetes-api).
  • The generated client secret.

These are referenced by kubelogin and the API server but are not committed to Git.
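
As an optional cross-check, the same values are published in the standard OIDC discovery document, so you can confirm the issuer, endpoints and JWKS URI from the command line (assuming jq is installed):

curl -fsS https://auth.reids.net.au/.well-known/openid-configuration \
  | jq '{issuer, authorization_endpoint, token_endpoint, jwks_uri}'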


4. Expose roles as a groups claim with a ZITADEL Action

Because there is no native “LDAP group DN → role” mapping, the supported pattern is:

  • Assign project roles via grants (Authorizations).
  • Use a ZITADEL Action to convert those roles into a groups claim in the token.
  • Configure Kubernetes to use that groups claim for RBAC.

4.1 Create the groupsClaim Action

In ZITADEL console:

  1. Go to Actions → New.
  2. Set:
    • Name: groupsClaim
    • Type: JavaScript Action.
  3. Use the following code as a starting point:
/**
 * Flatten all project role keys into a "groups" claim.
 *
 * Resulting token claim example:
 *   "groups": ["cluster-admin", "viewer"]
 *
 * Flow: Complement Token
 * Triggers: Pre Userinfo creation, Pre access token creation
 */
function groupsClaim(ctx, api) {
  if (!ctx.v1.user || !ctx.v1.user.grants || ctx.v1.user.grants.count === 0) {
    return;
  }

  const groups = [];

  ctx.v1.user.grants.grants.forEach((grant) => {
    // Optional: restrict to a specific project by projectId:
    // if (grant.projectId !== "KUBERNETES_INFRA_PROJECT_ID") {
    //   return;
    // }

    grant.roles.forEach((roleKey) => {
      if (!groups.includes(roleKey)) {
        groups.push(roleKey);
      }
    });
  });

  api.v1.claims.setClaim("groups", groups);
}
  4. Tick Allowed to fail (so an Action bug does not break all logins).
  5. Save the Action.

4.2 Attach the Action to the Complement Token flow

  1. Go to Actions → Flows.
  2. Select the Complement Token flow.
  3. Add a trigger:
    • Function: groupsClaim
    • Trigger: Pre Userinfo creation.
  4. Add another trigger:
    • Function: groupsClaim
    • Trigger: Pre access token creation.
  5. Save.

Now, whenever a token is created, ZITADEL will:

  • Inspect the user’s grants.
  • Collect the role keys from each grant.
  • Write them into a groups claim, for example ["cluster-admin"].
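
To confirm the claim without involving Kubernetes yet, you can call the userinfo endpoint with any valid access token issued for the kubernetes-api client; ACCESS_TOKEN below is a placeholder (one way to obtain a token is the kubelogin cache described in section 8):

# ACCESS_TOKEN is a placeholder for a real access token issued to kubernetes-api
ACCESS_TOKEN="CHANGE-ME"

curl -fsS -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  https://auth.reids.net.au/oidc/v1/userinfo | jq '.groups'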

5. Configure API server to use the groups claim

Update the Kubernetes API server to use the new groups claim produced by the Action.

5.1 OIDC flags

warning

Do not save any other files in the /etc/kubernetes/manifests/ directory, such as a backup copy of kube-apiserver.yaml. The kubelet watches this directory and treats every file in it as a static pod manifest, so a stray copy can spawn a second API server or break the existing one.

Edit /etc/kubernetes/manifests/kube-apiserver.yaml on each control plane node and add the OIDC flags.

important

Put quotes around the --oidc-username-prefix and --oidc-groups-prefix lines. Without quotes, the API server will fail to start.

Example snippet (do not duplicate existing flags):

spec:
  containers:
  - command:
    - kube-apiserver
    # existing flags...
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
    # Add after tls-private-key-file
    - --oidc-issuer-url=https://auth.reids.net.au
    - --oidc-client-id=CHANGE-ME-KUBERNETES-API-OIDC-CLIENT-ID
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    - "--oidc-username-prefix=oidc:"
    - "--oidc-groups-prefix=oidc:"
    # Add before image
    image: registry.k8s.io/kube-apiserver:v1.31.9
  • --oidc-client-id must match the Client ID of the kubernetes-api app in the Kubernetes Infrastructure project.
  • --oidc-groups-claim=groups tells the API server to read the groups claim written by the Action.
  • Prefixes (oidc:) keep OIDC identities distinct from any future in-cluster users.
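
After editing, one way to confirm the kubelet picked up the new flags is to inspect the running static pod (this assumes it carries the usual component=kube-apiserver label):

kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -- '--oidc'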

5.2 Restart and verify API server

Save the manifest; the static pod will restart automatically. Wait up to a minute.

Monitor status from the control plane node:

# Local health
curl -k https://127.0.0.1:6443/healthz
# From your workstation (admin context)
kubectl get nodes

Only continue once the API server is healthy and kubectl get nodes works again from your workstation.

warning

During API server restart a number of cluster pods will go into the CrashLoopBackOff state.

Anything that talks to the API as part of its own liveness/readiness (Calico controllers, cert-manager bits, Flux controllers, metrics-server) will either:

  • fail probes and restart, or
  • hit BackOff / CrashLoopBackOff if the API is unavailable for long enough.

Workload pods that mount projected service account secrets (kube-api-access-*) or use the secret/configmap caches (like GitLab runner) will log FailedMount errors until the API / informers are healthy again.

Once the API server has come back and stabilised, all of these should naturally return to Ready without manual intervention; during control-plane changes you can expect these specific controllers to look “unhealthy” or CrashLoop until the API is reachable again.
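
One way to watch the cluster settle is to list everything that is not yet Running or Completed until the output is empty:

kubectl get pods -A --no-headers | grep -Ev 'Running|Completed' || echo "all pods settled"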


6. RBAC binding using the OIDC groups claim

Once the API server is using the groups claim, any user with the cluster-admin role in ZITADEL will receive:

"groups": ["cluster-admin"]

The prefix --oidc-groups-prefix=oidc: means Kubernetes will see the group as oidc:cluster-admin.

6.1 ClusterRoleBinding for cluster-admins

From an admin context on your workstation:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: "oidc:cluster-admin"
  apiGroup: rbac.authorization.k8s.io
EOF

Verify:

kubectl get clusterrolebinding oidc-cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"oidc-cluster-admin"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"Group","name":"oidc:cluster-admin"}]}
  creationTimestamp: "2025-11-25T02:32:21Z"
  name: oidc-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidc:cluster-admin

This does the following:

  • Any user whose token contains groups: ["cluster-admin"] will be treated as part of the oidc:cluster-admin group.
  • The API server maps that claim to the subject in the ClusterRoleBinding.
  • Those users receive full administrative rights in the cluster.

You can repeat this pattern for other roles, for example mapping a future viewer role to a read-only ClusterRole.
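
Before testing with a real OIDC login, you can sanity-check the binding from your admin context using impersonation; the username value here is arbitrary because the decision hinges on the group:

# Simulate a request from a user whose token carried groups: ["cluster-admin"]
kubectl auth can-i '*' '*' \
  --as="oidc:AUTHORISED_USER_EMAIL" \
  --as-group="oidc:cluster-admin"

Expected answer: yes.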


7. Re-add a clean OIDC user and context via kubelogin

With kubeconfig reset, grants configured, the Action in place and the API server updated, you can now add a clean OIDC-based user and context to test the full flow.

7.1 Install kubelogin on macOS

Using Homebrew:

brew install int128/kubelogin/kubelogin

Confirm:

kubelogin version
kubelogin version v1.34.0 (go1.24.5 darwin_arm64)

7.2 Add OIDC user to kubeconfig

Assuming:

  • Cluster name: CLUSTER_NAME.
  • Context name: oidc-context.
  • User name: oidc-user.
  • OIDC client: kubernetes-api with a known client ID.

Run:

kubectl config set-credentials oidc-user \
--exec-api-version=client.authentication.k8s.io/v1 \
--exec-command=kubelogin \
--exec-arg=get-token \
--exec-arg=--oidc-issuer-url=https://auth.reids.net.au \
--exec-arg=--oidc-client-id=CHANGE-ME-KUBERNETES-API-OIDC-CLIENT-ID \
--exec-arg=--oidc-extra-scope=email \
--exec-arg=--oidc-extra-scope=profile \
--exec-arg=--oidc-extra-scope=offline_access \
--exec-interactive-mode=IfAvailable
User "oidc-user" set.

Then create a context that uses this user:

kubectl config set-context oidc-context --cluster=CLUSTER_NAME --user=oidc-user
Context "oidc-context" created.

Check contexts:

kubectl config get-contexts
CURRENT   NAME           CLUSTER        AUTHINFO    NAMESPACE
          oidc-context   CLUSTER_NAME   oidc-user

You should see:

  • oidc-context (cluster CLUSTER_NAME, user oidc-user).

Confirm which context is currently active:

kubectl config current-context
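
If you want to exercise the credential plugin on its own before involving kubectl, you can call kubelogin directly with the same arguments; it should open the browser login and print an ExecCredential JSON document containing a token:

kubelogin get-token \
  --oidc-issuer-url=https://auth.reids.net.au \
  --oidc-client-id=CHANGE-ME-KUBERNETES-API-OIDC-CLIENT-ID \
  --oidc-extra-scope=email \
  --oidc-extra-scope=profile \
  --oidc-extra-scope=offline_access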

7.3 Test the OIDC flow end-to-end

Use the OIDC context per command:

kubectl --context=oidc-context get nodes

Or switch to the OIDC context; it stays current until you deliberately switch again:

kubectl config use-context oidc-context

Expected behaviour on first run:

  1. kubelogin starts a local listener on http://localhost:8000.
  2. Your browser opens https://auth.reids.net.au/oauth/v2/authorize... with the kubernetes-api client.
  3. You log in as AUTHORISED_USER_EMAIL (or another user with the cluster-admin grant).
  4. ZITADEL calls the groupsClaim Action, which writes groups: ["cluster-admin"] into the token.
  5. The API server sees groups and prefixes it to oidc:cluster-admin.
  6. The ClusterRoleBinding grants cluster-admin rights and kubectl prints the node list.

If you instead see a Forbidden error, read the User value in the error message to confirm the username format and double-check that:

  • The user has a cluster-admin grant in Kubernetes Infrastructure.
  • The Action is running (token contains a groups claim).
  • The ClusterRoleBinding matches the prefixed group (oidc:cluster-admin).
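
A reasonably recent kubectl (roughly v1.27+, where the SelfSubjectReview API is available) can also show exactly which username and groups the API server derived from your token:

kubectl --context=oidc-context auth whoami

The username should be oidc:<email> and the groups should include oidc:cluster-admin.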

7.4 Forcing a fresh login

If you need to force a new browser login (for example to test a different user), clear the local OIDC cache used by the plugin:

kubectl oidc-login clean                
Deleted the token cache from /Users/andy/.kube/cache/oidc-login
Deleted the token cache from the keyring

Run the test command again:

kubectl --context=oidc-context get nodes

You should be prompted to log in again.


8. Optional: decode a real ID token

To prove end-to-end that the OIDC token path is working as intended, decode a live ID token issued for kubernetes-api.

  1. Find the latest cache entry:

    FILE=$(ls -t "${HOME}/.kube/cache/oidc-login"/* | head -1)
    cat "$FILE"

    You should see JSON similar to:

    {"id_token":"...","refresh_token":"..."}
  2. Decode the payload with Python 3:

    TOKEN=$(jq -r '.id_token' "$FILE")

    TOKEN="$TOKEN" python3 - << 'EOF'
    import base64, json, os

    token = os.environ["TOKEN"]
    parts = token.split(".")
    payload = parts[1]
    payload += "=" * (-len(payload) % 4)
    data = json.loads(base64.urlsafe_b64decode(payload))

    print(json.dumps(data, indent=2))
    EOF

    In the JSON output you should see:

    • iss: https://auth.reids.net.au.
    • aud and client_id matching the kubernetes-api client.
    • email: AUTHORISED_USER_EMAIL.
    • groups: ["cluster-admin"].
    • ZITADEL project role claims under urn:zitadel:iam:org:project:roles.

This proves that:

  • ZITADEL is issuing tokens with the correct issuer and audience.
  • The groupsClaim Action is populating groups correctly from grants.

9. Validation

9.1 ZITADEL health

curl -f https://auth.reids.net.au/debug/healthz

Expected: HTTP 200 from the internal ZITADEL instance.

9.2 LDAP login via console

  • Log in as a NAS-backed user via the ZITADEL console.
  • Confirm that the user appears in Projects → Kubernetes Infrastructure → Authorizations with the cluster-admin role if they should be an admin.

9.3 kubectl OIDC path

kubectl --context=oidc-context get nodes

Expected:

  • Browser-based login via auth.reids.net.au (on first run).
  • Successful node listing using OIDC user oidc:<email>.

If you see Forbidden with the correct OIDC username, auth is working and RBAC needs to be adjusted.


10. cluster-read-only role

Provide a role called cluster-read-only that:

  • Can list, get and watch resources.
  • Cannot create, update or delete resources.
  • Cannot exec into pods or otherwise change running workloads.

The pattern:

  1. ZITADEL role cluster-read-only (project-level).
  2. groupsClaim Action emits "groups": ["cluster-read-only"] for users with that grant.
  3. Kubernetes API server reads groups and applies --oidc-groups-prefix=oidc:.
  4. A ClusterRoleBinding maps Group oidc:cluster-read-only to the built-in ClusterRole view.

Result: anyone with the ZITADEL role cluster-read-only gets read-only access wherever the view ClusterRole allows.

10.1 Add a cluster-read-only role in ZITADEL

In the Kubernetes Infrastructure project:

  1. Go to Projects → Kubernetes Infrastructure → Roles.
  2. Click New.
  3. Set:
    • Key: cluster-read-only
    • Display name: Cluster read-only
  4. Save.

Because the groupsClaim Action flattens all project role keys into groups, any user with this role will get a token like:

"groups": ["cluster-read-only"]

If a user also has cluster-admin, you will see:

"groups": ["cluster-read-only", "cluster-admin"]

You do not need to change the Action; it already uses role keys.

10.2 Grant cluster-read-only to a user

Still in the Kubernetes Infrastructure project:

  1. Go to Projects → Kubernetes Infrastructure → Authorizations.
  2. Click New.
  3. Select:
    • User: a NAS-backed user who can sign in via LDAP.
    • Roles: tick cluster-read-only (and leave cluster-admin unticked for a pure read-only user).
  4. Save.

On the next login via kubelogin against the kubernetes-api client, that user’s token will include "groups": ["cluster-read-only"].

You can confirm this by decoding the ID token as described in section 8; look for:

"groups": [
  "cluster-read-only"
]

10.3 Map cluster-read-only to Kubernetes RBAC

The API server is already configured with:

  • --oidc-groups-claim=groups
  • --oidc-groups-prefix=oidc:

That means a token containing:

"groups": ["cluster-read-only"]

is seen by Kubernetes as the group:

oidc:cluster-read-only

Kubernetes ships with a built-in ClusterRole called view that:

  • Allows list/get/watch of most namespaced resources.
  • Does not allow writes (create/update/patch/delete).
  • Does not include powerful subresources such as pods/exec.
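
If you want to see exactly what view allows before binding to it, inspect the built-in ClusterRole (the output is long; head keeps it readable):

kubectl describe clusterrole view | head -n 40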

Create a ClusterRoleBinding mapping oidc:cluster-read-only to this built-in role:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: Group
  name: "oidc:cluster-read-only"
  apiGroup: rbac.authorization.k8s.io
EOF

Verify:

kubectl get clusterrolebinding oidc-viewer -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"oidc-viewer"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"view"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"Group","name":"oidc:cluster-read-only"}]}
  creationTimestamp: "2025-11-25T08:14:26Z"
  name: oidc-viewer
  resourceVersion: "56425581"
  uid: bbb02183-202b-4078-85ab-283448c58ddb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidc:cluster-read-only

You should see roleRef.name: view and subjects[0].name: oidc:cluster-read-only.

10.4 Test the viewer permissions

Assuming you already have a kubelogin-based context (for example oidc-context) for a user who only has the cluster-read-only role:

10.4.1 Basic can-i checks

# Nodes are cluster-scoped; view does not usually include them, so this may be "no"
kubectl --context=oidc-context auth can-i list nodes

# Should be allowed (namespaced reads)
kubectl --context=oidc-context auth can-i list pods -A
kubectl --context=oidc-context auth can-i get deployments -A

# Should be denied (writes)
kubectl --context=oidc-context auth can-i delete pod some-pod -n default
kubectl --context=oidc-context auth can-i create deployment -n default

Expected:

  • Read verbs (get, list, watch) for standard namespaced resources: yes.
  • Cluster-scoped resources such as nodes: typically no when using the default view ClusterRole.
  • Write verbs (create, update, patch, delete): no.

10.4.2 Logs vs exec

Check that logs are readable but exec is blocked:

# Logs are read-only, should be allowed
kubectl --context=oidc-context auth can-i get pod/some-pod --subresource=log -n default

# Exec can mutate state, should be denied
kubectl --context=oidc-context auth can-i create pod/some-pod --subresource=exec -n default

Again:

  • Logs: yes.
  • Exec: no.

10.5 Troubleshooting

10.5.1 Viewer user gets Forbidden on reads

Check:

  1. The user has a cluster-read-only grant in Kubernetes Infrastructure:
    • Projects → Kubernetes Infrastructure → Authorizations.
  2. The ID token includes "groups": ["cluster-read-only"]:
    • Decode a fresh token and inspect the JSON.
  3. The ClusterRoleBinding exists and is correct:
    • kubectl get clusterrolebinding oidc-viewer -o yaml
    • roleRef.name is view.
    • subjects[0].name is oidc:cluster-read-only.

If the token has groups but RBAC still denies, the problem is almost always in the ClusterRoleBinding (wrong group name, typo, or missing prefix).

10.5.2 Viewer user can delete or modify resources

Check:

  1. The user does not have cluster-admin (or other powerful) roles granted in ZITADEL for this project.
  2. There are no extra ClusterRoleBinding objects that grant that user or group elevated roles:
    • kubectl get clusterrolebinding -o yaml | grep -n "oidc-viewer"
    • kubectl get clusterrolebinding -o yaml | grep -n "AUTHORISED_USER_EMAIL"

Remove or correct any stray bindings you created during testing.


11. Rollback

If needed, you can roll back the Kubernetes-side configuration.

11.1 Remove OIDC flags from API server

  • Edit /etc/kubernetes/manifests/kube-apiserver.yaml.
  • Remove or comment the --oidc-* flags.
  • Save and allow the API server pods to restart.

11.2 Remove ClusterRoleBinding

kubectl delete clusterrolebinding oidc-cluster-admin

(Or remove any other OIDC-related ClusterRoleBinding objects you created for testing.)
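
To confirm the rollback took effect (expect the grep to match nothing and the get to return NotFound errors):

# No OIDC flags should remain on the running API server
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -- '--oidc'

# The OIDC ClusterRoleBindings should be gone
kubectl get clusterrolebinding oidc-cluster-admin oidc-viewer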


12. Verification checklist

  • Kubeconfig is clean and admin context works.
  • Kubernetes Infrastructure project exists with cluster-admin role.
  • AUTHORISED_USER_EMAIL (or equivalent) has a cluster-admin grant in the project.
  • The groupsClaim Action exists and is attached to the Complement Token flow.
  • Tokens for the kubernetes-api client contain a groups claim (for example ["cluster-admin"]).
  • kube-apiserver is configured with --oidc-groups-claim=groups and prefix flags are quoted.
  • A ClusterRoleBinding exists mapping oidc:cluster-admin to the built-in cluster-admin role.
  • kubectl --context=oidc-context get nodes works for the OIDC user via kubelogin.
  • Anonymous access is denied, for example kubectl auth can-i list nodes --as=system:anonymous returns Forbidden.