Internal IdP Dashboard (OAuth2 Proxy)
This runbook configures Kubernetes Dashboard access via SSO using:
- Internal ZITADEL: https://auth.reids.net.au
- Dashboard URL: https://dash.reids.net.au
- oauth2-proxy as the reverse proxy in front of the Dashboard
- Secrets encrypted in Git using SOPS, decrypted by Flux at runtime
This assumes OIDC is already configured on the API server (issuer, username claim, groups claim, prefixes) and that RBAC is already configured for your OIDC groups.
It also assumes you already have:
- ingress-nginx working for dash.reids.net.au
- Kubernetes Dashboard deployed in the kubernetes-dashboard namespace
- A working SOPS age key available to Flux for decryption
Identity provider series
- IdP dual overview
- IdP dual architecture
- IdP internal deployment
- IdP internal console
- IdP internal SMTP
- IdP internal LDAP
- IdP internal OIDC
- IdP internal OAUTH2 proxy - you are here
- IdP internal backup and restore
1. Overview
Kubernetes Dashboard does not natively “know” how to do ZITADEL SSO. The pattern here is:
- Your browser hits https://dash.reids.net.au.
- oauth2-proxy forces you through ZITADEL login (OIDC authorisation code flow).
- After login, oauth2-proxy allows the request through to the Dashboard.
- Any cluster authorisation is still controlled by Kubernetes RBAC.
oauth2-proxy uses a confidential OIDC client (client secret). This is not the same as the kubelogin client, where PKCE without a client secret is the correct choice.
You can enable PKCE (S256) in oauth2-proxy as defence in depth, but it remains a confidential client and still uses a client secret.
Why Redis (session store)
To avoid cookie refresh issues:
- Deploy a small, in-cluster Redis instance in identity-internal.
- Store oauth2-proxy sessions in Redis to coordinate refresh and avoid refresh-token races.
- Keep secrets in Git via SOPS, decrypted by Flux at runtime.
- Add a minimal NetworkPolicy so only oauth2-proxy-dashboard can talk to Redis.
If you want refresh to work reliably, you must allow refresh tokens in ZITADEL and request offline_access. If you choose not to use refresh tokens, remove offline_access from OAUTH2_PROXY_SCOPE and expect to re-authenticate when the upstream token expires.
2. ZITADEL: create the kubernetes-dashboard application
In the internal ZITADEL console (https://auth.reids.net.au/ui/console):
- Go to Projects → Kubernetes Infrastructure → Applications → New.
- Choose Web.
- Set Name: kubernetes-dashboard.
- For authentication method, select Code (authorisation code flow with client secret).
- Redirect settings:
  - Redirect URIs: https://dash.reids.net.au/oauth2/callback
  - Post logout URIs: https://dash.reids.net.au
- OIDC configuration:
  - Grant Types: tick Authorization Code
  - Refresh Token: tick this if you keep offline_access in OAUTH2_PROXY_SCOPE
- Token settings:
  - Tick User roles inside ID Token
  - Optionally tick User Info inside ID Token (useful for debugging)
- Generate a Client Secret and store it securely.
Record these for the next steps:
- Client ID (a numeric value, not the application name)
- Client Secret
User access is not “set in the Dashboard”. It is set via your normal Kubernetes RBAC (ClusterRoleBindings) based on the OIDC user and group claims. If a user can log in but sees Forbidden, that is RBAC doing its job.
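For example, a minimal grant for Dashboard admins might look like the following. This is only a sketch: the binding name, the group k8s-admins, and the oidc: prefix are illustrative and must match the groups claim and prefix your API server is configured with.

# Example only: adjust the group name and prefix to match your API server's
# --oidc-groups-claim / --oidc-groups-prefix settings.
kubectl create clusterrolebinding dashboard-oidc-admins \
  --clusterrole=cluster-admin \
  --group="oidc:k8s-admins"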
3. Cluster: deploy OAuth2 Proxy in front of the Dashboard
This is implemented in the identity-internal repo under k8s/prod/.
3.1 Target repo structure
tree -a -L 7 -I '.git|.DS_Store|node_modules|.next|dist'
├── .gitignore
├── .sops.yaml
├── k8s
│ └── prod
│ ├── 10-helm-repository.yaml
│ ├── 20-secrets-db.enc.yaml
│ ├── 21-secrets-zitadel.enc.yaml
│ ├── 30-db-statefulset.yaml
│ ├── 50-helm-release-zitadel.yaml
│ ├── 60-ingress.yaml
│ ├── 60-secret-oauth2-proxy-dashboard.enc.yaml
│ ├── 70-oauth2-proxy-dashboard.yaml
│ ├── 71-redis-dashboard.enc.yaml
│ ├── 72-redis-dashboard.yaml
│ ├── 73-networkpolicy-redis-dashboard.yaml
│ └── kustomization.yaml
└── README.md
3.2 Create SOPS-encrypted secrets
How this works with SOPS and Flux:
- You commit the encrypted *.enc.yaml files to Git.
- Flux decrypts them at apply time using your cluster’s SOPS age key.
- Kubernetes only ever sees the decrypted Secrets at apply time.
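If you want to confirm that the Flux Kustomization driving this path actually has SOPS decryption enabled (assuming it is named identity-internal in flux-system, as used in the reconcile commands later), check its spec:

# Expect something like {"provider":"sops","secretRef":{"name":"sops-age"}};
# the secret name may differ in your setup. Empty output means decryption is not configured.
kubectl -n flux-system get kustomization identity-internal \
  -o jsonpath='{.spec.decryption}{"\n"}'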
3.2.1 OAUTH2_PROXY secret
You need three secret values:
- OAUTH2_PROXY_CLIENT_ID (from the ZITADEL kubernetes-dashboard app)
- OAUTH2_PROXY_CLIENT_SECRET (from the same ZITADEL app)
- OAUTH2_PROXY_COOKIE_SECRET (must base64-decode to 16, 24, or 32 bytes)
The oauth2-proxy error refers to the decoded byte length, not the visible character count.
Use openssl rand -base64 32 to get a base64 string that decodes to 32 bytes.
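If you want to sanity-check a candidate value before committing it, decode it and count the bytes:

# Optional sanity check: what matters is the decoded byte length, not the character count.
CANDIDATE="$(openssl rand -base64 32 | tr -d '\n')"
printf '%s' "$CANDIDATE" | base64 -d | wc -c   # expect: 32 (use `base64 -D` on older macOS)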
- Generate cookie secret (good default):
OAUTH2_PROXY_COOKIE_SECRET="$(openssl rand -base64 32 | tr -d '\n')"
echo "$OAUTH2_PROXY_COOKIE_SECRET"
- Create the secret:
# k8s/prod/60-secret-oauth2-proxy-dashboard.enc.yaml
apiVersion: v1
kind: Secret
metadata:
name: oauth2-proxy-dashboard
namespace: identity-internal
type: Opaque
stringData:
# From ZITADEL app `kubernetes-dashboard`
OAUTH2_PROXY_CLIENT_ID: "REPLACE_ME"
OAUTH2_PROXY_CLIENT_SECRET: "REPLACE_ME"
OAUTH2_PROXY_COOKIE_SECRET: "REPLACE_ME"
- Encrypt the secret with SOPS (in-place):
sops -e -i k8s/prod/60-secret-oauth2-proxy-dashboard.enc.yaml
3.2.2 Redis secret
- Generate password:
INTERNAL_REDIS_PASSWORD="$(openssl rand -base64 32 | tr -d '\n')"
echo "$INTERNAL_REDIS_PASSWORD"
- Create Redis secret:
cat > k8s/prod/71-redis-dashboard.enc.yaml << 'EOF'
# k8s/prod/71-redis-dashboard.enc.yaml
apiVersion: v1
kind: Secret
metadata:
name: oauth2-proxy-dashboard-redis
namespace: identity-internal
type: Opaque
stringData:
REDIS_PASSWORD: REPLACE_ME
EOF
# Replace placeholder
perl -pi -e "s/REPLACE_ME/$INTERNAL_REDIS_PASSWORD/g" k8s/prod/71-redis-dashboard.enc.yaml
- Encrypt secret with SOPS:
sops -e -i k8s/prod/71-redis-dashboard.enc.yaml
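Before committing, confirm both files still decrypt with your key and contain only encrypted values (this assumes the SOPS age key is also available on your workstation):

# Each command prints OK only if the file decrypts with your age key
sops -d k8s/prod/60-secret-oauth2-proxy-dashboard.enc.yaml > /dev/null && echo "dashboard secret: OK"
sops -d k8s/prod/71-redis-dashboard.enc.yaml > /dev/null && echo "redis secret: OK"
# Committed values should be ENC[...] blobs, never plaintext
rg -n "ENC\[" k8s/prod/71-redis-dashboard.enc.yaml | head -n 3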
3.3 Deploy oauth2-proxy + Service + Ingress
Create k8s/prod/70-oauth2-proxy-dashboard.yaml:
# k8s/prod/70-oauth2-proxy-dashboard.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: oauth2-proxy-dashboard
namespace: identity-internal
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy-dashboard
template:
metadata:
labels:
app: oauth2-proxy-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- name: tmp
emptyDir: {}
containers:
- name: oauth2-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:v7.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 4180
name: http
volumeMounts:
- name: tmp
mountPath: /tmp
env:
- name: OAUTH2_PROXY_PROVIDER
value: oidc
- name: OAUTH2_PROXY_OIDC_ISSUER_URL
value: https://auth.reids.net.au
- name: OAUTH2_PROXY_REDIRECT_URL
value: https://dash.reids.net.au/oauth2/callback
- name: OAUTH2_PROXY_EMAIL_DOMAINS
value: "*"
- name: OAUTH2_PROXY_SCOPE
value: "openid email profile offline_access"
- name: OAUTH2_PROXY_HTTP_ADDRESS
value: 0.0.0.0:4180
# Dashboard upstream: Kong proxy service is TLS and listens on 443
- name: OAUTH2_PROXY_UPSTREAMS
value: https://kubernetes-dashboard-kong-proxy.kubernetes-dashboard.svc.cluster.local:443
- name: OAUTH2_PROXY_REVERSE_PROXY
value: "true"
- name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
value: "true"
- name: OAUTH2_PROXY_CODE_CHALLENGE_METHOD
value: "S256"
# Session store: Redis
- name: OAUTH2_PROXY_SESSION_STORE_TYPE
value: redis
- name: OAUTH2_PROXY_REDIS_CONNECTION_URL
value: redis://oauth2-proxy-dashboard-redis.identity-internal.svc.cluster.local:6379
- name: OAUTH2_PROXY_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: oauth2-proxy-dashboard-redis
key: REDIS_PASSWORD
# Cookie naming: reduce collisions with other proxies
- name: OAUTH2_PROXY_COOKIE_NAME
value: "_oauth2_proxy_dash"
# Cookie hardening
- name: OAUTH2_PROXY_COOKIE_SECURE
value: "true"
- name: OAUTH2_PROXY_COOKIE_SAMESITE
value: "lax"
# IMPORTANT: cookie_refresh must be less than cookie_expire
#
# Practical defaults (internal admin UI):
# - EXPIRE 12h: your session lasts half a day idle.
# - REFRESH 1h: active browsing refreshes as you go.
#
# Note: refresh requires requests. If the tab is idle/suspended, no refresh happens.
- name: OAUTH2_PROXY_COOKIE_EXPIRE
value: "12h"
- name: OAUTH2_PROXY_COOKIE_REFRESH
value: "1h"
# Pass identity and tokens upstream
- name: OAUTH2_PROXY_SET_XAUTHREQUEST
value: "true"
- name: OAUTH2_PROXY_PASS_ACCESS_TOKEN
value: "true"
- name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
value: "true"
- name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
value: "true"
# Kong in-cluster certs are often self-signed. Prefer fixing trust, but this works.
- name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
value: "true"
# Secrets
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
name: oauth2-proxy-dashboard
key: OAUTH2_PROXY_CLIENT_ID
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: oauth2-proxy-dashboard
key: OAUTH2_PROXY_CLIENT_SECRET
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
name: oauth2-proxy-dashboard
key: OAUTH2_PROXY_COOKIE_SECRET
readinessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 2
failureThreshold: 6
livenessProbe:
httpGet:
path: /ping
port: http
initialDelaySeconds: 15
periodSeconds: 20
timeoutSeconds: 2
failureThreshold: 3
resources:
requests:
cpu: 25m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop: ["ALL"]
---
apiVersion: v1
kind: Service
metadata:
name: oauth2-proxy-dashboard
namespace: identity-internal
spec:
selector:
app: oauth2-proxy-dashboard
ports:
- name: http
port: 4180
targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dash
namespace: identity-internal
annotations:
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy-dashboard.identity-internal.svc.cluster.local:4180/oauth2/auth
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email,X-Auth-Request-Groups
spec:
ingressClassName: nginx
tls:
- hosts:
- dash.reids.net.au
rules:
- host: dash.reids.net.au
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: oauth2-proxy-dashboard
port:
number: 4180
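If you want to confirm the upstream URL in OAUTH2_PROXY_UPSTREAMS is reachable from inside the cluster before letting Flux roll this out, a throwaway curl pod works; the curl image and tag here are only a suggestion.

# Expect an HTTP status code (e.g. 200 or a redirect), not a DNS or connection error.
# -k matches OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY=true above.
kubectl -n identity-internal run -it --rm curl-test \
  --image=curlimages/curl:8.10.1 --restart=Never -- \
  curl -ks -o /dev/null -w '%{http_code}\n' \
  https://kubernetes-dashboard-kong-proxy.kubernetes-dashboard.svc.cluster.local:443/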
3.4 Deploy Redis (StatefulSet + Service)
Create k8s/prod/72-redis-dashboard.yaml:
# k8s/prod/72-redis-dashboard.yaml
apiVersion: v1
kind: Service
metadata:
name: oauth2-proxy-dashboard-redis
namespace: identity-internal
labels:
app: oauth2-proxy-dashboard-redis
spec:
clusterIP: None
selector:
app: oauth2-proxy-dashboard-redis
ports:
- name: redis
port: 6379
targetPort: redis
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: oauth2-proxy-dashboard-redis
namespace: identity-internal
spec:
serviceName: oauth2-proxy-dashboard-redis
replicas: 1
selector:
matchLabels:
app: oauth2-proxy-dashboard-redis
template:
metadata:
labels:
app: oauth2-proxy-dashboard-redis
spec:
terminationGracePeriodSeconds: 30
securityContext:
seccompProfile:
type: RuntimeDefault
fsGroup: 999
fsGroupChangePolicy: OnRootMismatch
volumes:
- name: tmp
emptyDir: {}
containers:
- name: redis
image: docker.io/library/redis:8.2.3-alpine
imagePullPolicy: IfNotPresent
ports:
- name: redis
containerPort: 6379
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: oauth2-proxy-dashboard-redis
key: REDIS_PASSWORD
securityContext:
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
volumeMounts:
- name: data
mountPath: /data
- name: tmp
mountPath: /tmp
command:
- sh
- -ec
- |
exec redis-server --bind 0.0.0.0 --port 6379 --protected-mode yes --dir /data --appendonly yes --save 60 1 --requirepass "$REDIS_PASSWORD" --pidfile /tmp/redis.pid
readinessProbe:
exec:
command:
- sh
- -ec
- 'redis-cli -a "$REDIS_PASSWORD" ping | grep -q PONG'
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 2
failureThreshold: 6
livenessProbe:
exec:
command:
- sh
- -ec
- 'redis-cli -a "$REDIS_PASSWORD" ping | grep -q PONG'
initialDelaySeconds: 15
periodSeconds: 20
timeoutSeconds: 2
failureThreshold: 3
resources:
requests:
cpu: 25m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
volumeClaimTemplates:
- metadata:
name: data
spec:
storageClassName: nfs-client
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
3.5 Restrict access with NetworkPolicy
Create k8s/prod/73-networkpolicy-redis-dashboard.yaml:
# k8s/prod/73-networkpolicy-redis-dashboard.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-oauth2-proxy-dashboard-to-redis
namespace: identity-internal
spec:
podSelector:
matchLabels:
app: oauth2-proxy-dashboard-redis
policyTypes: ["Ingress"]
ingress:
- from:
- podSelector:
matchLabels:
app: oauth2-proxy-dashboard
ports:
- protocol: TCP
port: 6379
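To sanity-check that the policy is actually enforced (this only proves anything if your CNI implements NetworkPolicy), run a throwaway pod without the app=oauth2-proxy-dashboard label and confirm it cannot reach Redis; reusing the Redis image from above is just a convenience.

# The test pod does not carry the app=oauth2-proxy-dashboard label, so with an
# enforcing CNI the connection should be dropped and time out.
kubectl -n identity-internal run -it --rm np-test \
  --image=docker.io/library/redis:8.2.3-alpine --restart=Never -- \
  sh -c 'timeout 5 redis-cli -h oauth2-proxy-dashboard-redis -p 6379 ping || echo "blocked (expected)"'
# A NOAUTH or PONG reply instead means the pod reached Redis and the policy is not being enforced.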
3.6 Add resources to kustomization and deploy
Ensure the new files are referenced by k8s/prod/kustomization.yaml in the correct order:
- Secrets first
- Redis resources
- NetworkPolicy
- oauth2-proxy Deployment/Ingress
# k8s/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: identity-internal
resources:
- 10-helm-repository.yaml
- 20-secrets-db.enc.yaml
- 21-secrets-zitadel.enc.yaml
- 30-db-statefulset.yaml
- 50-helm-release-zitadel.yaml
- 60-ingress.yaml
- 60-secret-oauth2-proxy-dashboard.enc.yaml
- 71-redis-dashboard.enc.yaml
- 72-redis-dashboard.yaml
- 73-networkpolicy-redis-dashboard.yaml
- 70-oauth2-proxy-dashboard.yaml
Commit and push:
git add .
git commit -m "feat: protect dashboard with oauth2-proxy (ZITADEL)"
git push
Reconcile Flux:
flux reconcile source git flux-system
flux reconcile kustomization identity-internal --with-source
A Secret change does not automatically restart pods when the secret is consumed via environment variables.
After changing any secrets, restart oauth2-proxy:
kubectl -n identity-internal rollout restart deploy/oauth2-proxy-dashboard
kubectl -n identity-internal rollout status deploy/oauth2-proxy-dashboard
4. Verification
4.1 Kubernetes objects
kubectl -n identity-internal get deploy,svc,ingress | rg -n "oauth2-proxy-dashboard|dash"
kubectl -n identity-internal rollout status deploy/oauth2-proxy-dashboard
kubectl -n identity-internal logs deploy/oauth2-proxy-dashboard --tail=200
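Also confirm the SOPS-managed Secrets landed in the cluster with the expected keys (describe shows key names and sizes only, not values):

kubectl -n identity-internal describe secret oauth2-proxy-dashboard
kubectl -n identity-internal describe secret oauth2-proxy-dashboard-redis
# Expect the three OAUTH2_PROXY_* keys and REDIS_PASSWORD, each with a non-zero byte count.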
4.2 Upstream sanity (service exists)
Confirm the Dashboard services in the kubernetes-dashboard namespace:
kubectl -n kubernetes-dashboard get svc -o wide
kubectl -n kubernetes-dashboard get endpoints
You should see kubernetes-dashboard-kong-proxy with port 443/TCP and endpoints on :8443.
4.3 DNS check (catch the “no such host” 502)
If oauth2-proxy shows:
lookup kubernetes-dashboard-kong-proxy.kubernetes-dashboard.svc.cluster.local ...: no such host
run a DNS check from inside the cluster:
kubectl -n identity-internal run -it --rm dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- nslookup kubernetes-dashboard-kong-proxy.kubernetes-dashboard.svc.cluster.local
4.4 Browser flow
- Open an incognito window.
- Browse to https://dash.reids.net.au.
- You should be redirected to ZITADEL at https://auth.reids.net.au.
- Log in as a NAS-backed user.
- You should land on the Dashboard UI.
4.5 Clear session (test other users)
To force the login flow again (and test another user), use one of:
- Incognito/private window (fastest).
- Clear the site cookie on dash.reids.net.au for cookie name _oauth2_proxy_dash.
- Visit the sign-out endpoint: https://dash.reids.net.au/oauth2/sign_out, or add a redirect at the end: https://dash.reids.net.au/oauth2/sign_out?rd=/
You do not need anything past /oauth2/sign_out. The rd query string is optional and just improves UX.
4.6 Redis is reachable (from cluster)
kubectl -n identity-internal get pods,svc | rg -n "oauth2-proxy-dashboard-redis"
Optional: exec a quick ping:
REDIS_POD="$(kubectl -n identity-internal get pod -l app=oauth2-proxy-dashboard-redis -o name | head -n1)"
kubectl -n identity-internal exec -it "$REDIS_POD" -- sh -lc 'redis-cli -a "$REDIS_PASSWORD" PING'
Expected output: PONG
4.7 oauth2-proxy is using Redis
Check env (sanity):
kubectl -n identity-internal get deploy oauth2-proxy-dashboard -o yaml | rg -n "SESSION_STORE|REDIS_CONNECTION|REDIS_PASSWORD|COOKIE_EXPIRE|COOKIE_REFRESH|COOKIE_NAME"
Then monitor for refresh events:
kubectl -n identity-internal logs -f deploy/oauth2-proxy-dashboard | rg -n "AuthSuccess|Refreshing session|Unable to refresh session|invalid|error|warn"
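To confirm sessions are actually landing in Redis (and not only in cookies), count the keys after at least one login; the exact key naming is an oauth2-proxy implementation detail, so the count is enough:

REDIS_POD="$(kubectl -n identity-internal get pod -l app=oauth2-proxy-dashboard-redis -o name | head -n1)"
kubectl -n identity-internal exec -it "$REDIS_POD" -- sh -lc 'redis-cli -a "$REDIS_PASSWORD" DBSIZE'
# Expect a count of at least 1 once someone has logged in via the Dashboard.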
5. Troubleshooting
5.1 Cookie secret length errors
Symptom:
cookie_secret must be 16, 24, or 32 bytes ... but is N bytes
Fix:
- Regenerate the cookie secret using openssl rand -base64 32.
- Update k8s/prod/60-secret-oauth2-proxy-dashboard.enc.yaml, re-encrypt, commit, push.
- Reconcile Flux, then restart the deployment.
5.2 “session ticket cookie failed validation” and session is removed
Symptom in logs:
Error loading cookied session: session ticket cookie failed validation ... removing session
Most common causes:
- You changed OAUTH2_PROXY_COOKIE_SECRET (secret rotation) but your browser still has an old _oauth2_proxy_dash cookie.
- You accidentally ran multiple oauth2-proxy instances for the same host with different cookie secrets.
- Cookie collisions (same cookie name used elsewhere, or a prior cookie still present).
Fix steps:
- Clear the dash.reids.net.au site cookies (or use incognito), then retry login.
- Confirm there is exactly one oauth2-proxy handling the host.
- Confirm the cookie name is unique (we set _oauth2_proxy_dash).
5.3 502/500 after successful login (upstream DNS or service mismatch)
Symptom in oauth2-proxy logs:
Error proxying to upstream server: dial tcp: lookup ...: no such host
Fix:
- Confirm the real Dashboard service name and namespace (Section 4.2).
- Update OAUTH2_PROXY_UPSTREAMS accordingly.
- Reconcile Flux and restart the deployment.
5.4 Forbidden after successful login
A Forbidden from Kubernetes after login means:
- Authentication worked (OIDC succeeded).
- Authorisation failed (RBAC is not granting the required permissions).
Check:
- The user’s ID token includes the expected groups.
- There is a ClusterRoleBinding mapping the expected groups to roles (see the check below).
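If you have impersonation rights, you can test what a group is allowed to do without logging in as that user; the username and group below are placeholders and must match the claims (including any prefix) your API server expects.

# Replace the user and group with the exact values from the ID token (prefix included).
kubectl auth can-i list pods --all-namespaces --as="REPLACE_USERNAME" --as-group="oidc:k8s-admins"
# List the bindings that reference your OIDC groups
kubectl get clusterrolebindings -o wide | rg -i "oidc"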
5.5 zsh parse error near '|' when grepping logs
You likely typed a trailing escape before the pipe. Use:
kubectl -n identity-internal logs deploy/oauth2-proxy-dashboard --since=20m | rg -n "csrf|state|callback|set-cookie|session|save|error|warn|redis|decrypt|decode|invalid"
No trailing backslashes before the pipe: the command above fits on a single line, so no line continuation is needed.
6. Rollback
To remove SSO protection:
- Remove these resources from k8s/prod/kustomization.yaml:
  - 60-secret-oauth2-proxy-dashboard.enc.yaml
  - 70-oauth2-proxy-dashboard.yaml
- Commit and push.
- Reconcile Flux.
flux reconcile kustomization identity-internal --with-source
To remove Redis:
- Remove Redis session config from oauth2-proxy (session store vars).
- Remove Redis StatefulSet/Service/NetworkPolicy and secret from kustomization.
- Reconcile Flux.
7. oauth2-proxy cookies, CSRF and session refresh (Dashboard + ZITADEL)
We are using:
- dash.reids.net.au for Kubernetes Dashboard
- auth.reids.net.au for ZITADEL (OIDC provider)
- oauth2-proxy in front of the Dashboard
In a working session you will observe:
- /oauth2/auth returns 202 Accepted
- /oauth2/userinfo returns your identity and groups
- /oauth2/callback responds with:
Set-Cookie: _oauth2_proxy_dash_csrf=; Path=/; Max-Age=0; HttpOnly; Secure; SameSite=Lax
Set-Cookie: _oauth2_proxy_dash=...; Path=/; Max-Age=...; HttpOnly; Secure; SameSite=Lax
That is the expected success path.
7.1 What the CSRF cookie is doing (and why it appears)
The CSRF cookie exists to protect the OIDC login flow itself.
When a session cookie is missing or invalid, oauth2-proxy starts a new login by redirecting you to the IdP. During that redirect flow it sets a CSRF cookie (example name: _oauth2_proxy_dash_csrf) and includes a matching value in the request state.
On the callback (/oauth2/callback), oauth2-proxy validates that the returned state matches the CSRF cookie value. If it matches:
- the CSRF cookie is cleared (Max-Age=0)
- the session cookie is set (_oauth2_proxy_dash=...)
7.2 Why “CSRF-only” correlates with 401s
In a broken state, the pattern is:
- _oauth2_proxy_dash not present
- _oauth2_proxy_dash_csrf present
- Dashboard XHR calls to /api/v1/... return 401
That usually means the session cookie is missing or invalid, so auth requests fail.
7.3 Cookie expiry vs cookie refresh
Two oauth2-proxy parameters matter:
- cookie_expire: how long the session cookie is valid
- cookie_refresh: how often oauth2-proxy tries to refresh the cookie while requests are happening
If the browser tab is idle (or Chrome throttles or suspends background tabs), there may be no requests, so there is nothing to trigger a refresh. If the cookie expires during that time, you will get a re-login flow on next interaction.
7.4 Session expiry checks
Verification:
- Check the _oauth2_proxy_dash cookie expiry after login.
- Leave the tab idle longer than cookie_expire.
- Return and trigger a request (click a link or refresh). Expect a login.
Mitigation options (internal admin UI):
- cookie_expire: 12h and cookie_refresh: 1h (default in this doc)
- If you often leave it overnight, use cookie_expire: 24h and cookie_refresh: 1h
7.5 If refresh fails, inspect the callback
Capture the response headers for /oauth2/callback:
- Do you see Set-Cookie: _oauth2_proxy_dash=...?
- If yes, does the cookie appear in storage immediately afterwards?
- If no, check the oauth2-proxy logs for csrf, state, session, error.
Also check:
- https://dash.reids.net.au/oauth2/userinfo (should be 200 when logged in)
- https://dash.reids.net.au/oauth2/auth (should be 202 when logged in)
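If you prefer a terminal over browser devtools, you can replay the session cookie with curl; copy the _oauth2_proxy_dash value from your browser and treat it as a credential.

COOKIE='_oauth2_proxy_dash=PASTE_VALUE_HERE'
curl -s -o /dev/null -w 'auth:     %{http_code}\n' -b "$COOKIE" https://dash.reids.net.au/oauth2/auth
curl -s -o /dev/null -w 'userinfo: %{http_code}\n' -b "$COOKIE" https://dash.reids.net.au/oauth2/userinfo
# Logged in: 202 and 200. Logged out or expired session: 401 for both.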
7.6 Reduce cookie collision risk
If you ever run more than one oauth2-proxy in the cluster, or you suspect stale cookies:
- Keep a unique cookie name per app (_oauth2_proxy_dash).
- Avoid setting a cookie domain that spans multiple subdomains unless you really mean it.
- Prefer one proxy instance per host, with a single replica, unless you have a shared session store and consistent config.
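A quick way to spot duplicate cookie names across the cluster (assuming other proxies also set the name via this env var):

# Every oauth2-proxy Deployment in the cluster should show a distinct OAUTH2_PROXY_COOKIE_NAME value.
kubectl get deploy -A -o yaml | rg -n "OAUTH2_PROXY_COOKIE_NAME" -A 1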
8. Verification checklist
- kubernetes-dashboard app exists in ZITADEL with redirect https://dash.reids.net.au/oauth2/callback.
- oauth2-proxy-dashboard Deployment is Ready and stable.
- https://dash.reids.net.au redirects to https://auth.reids.net.au when logged out.
- After login, the Dashboard loads successfully via the configured upstream.
- RBAC controls what the user can see and do (admin vs read-only).
- You can clear sessions and test multiple users (incognito, cookie clear, or /oauth2/sign_out).
- Redis pod is running in identity-internal (if using Redis).
- NetworkPolicy only allows ingress to Redis from oauth2-proxy-dashboard (if using NetworkPolicy).
- oauth2-proxy logs show refresh attempts without repeated refresh-token errors.