Blaster Dockerfile and GitLab CI
This runbook covers how images are built for the Blaster game: the Dockerfile layout, the GitLab CI pipeline and the tag format used by Flux image automation.
Blaster GitOps series
- Blaster GitOps summary
- Blaster repo and branches
- Dockerfile & GitLab CI - you are here
- Clerk authentication & user setup
- Google OAuth for Clerk
- Blaster prep for automation
- Dev app k8s manifests
- Dev flux sources & Kustomizations
- Dev image automation
- Dev SOPS & age
- Dev verification & troubleshooting
- Dev full runbook
- Prod overview
- Prod app k8s manifests and deployment
- Prod Flux GitOps and image automation
- Prod Cloudflare, Origin CA and tunnel routing
- Prod full runbook
- Post development branches
1. Context
Blaster is a Next.js + Phaser game built with Node 22 and deployed to an on‑prem Kubernetes cluster.
The build chain is:
- Dockerfile (single file for dev and prod images).
- GitLab CI running on a Kubernetes runner.
- Kaniko for building and pushing images to `registry.reids.net.au/games/blaster`.
- FluxCD image automation reading tags to keep k8s dev in sync.
This document assumes:
- App repo: `games/blaster`.
- Infra repo: `fluxgitops/flux-config`.
- Registry: `registry.reids.net.au`.
For repo and branch details see Blaster repo and branches.
2. Dockerfile design
2.1 Goals
- Single Dockerfile for both dev and prod images.
- Build with Node 22 on Alpine for small images.
- Multi-stage build:
  - Stage 1: install dev dependencies and run `next build`.
  - Stage 2: slim runtime containing only the built artefacts.
- Accept build-time arguments for public config (for example the Clerk publishable key).
2.2 Dockerfile
Updated 07/12/2025 due to CVE-2025-55182 (React Server Components RCE): Blaster impact assessment and remediation runbook.
File: `games/blaster/Dockerfile`

```dockerfile
# syntax=docker/dockerfile:1.7
FROM node:22-alpine3.20 AS build
WORKDIR /app

ARG APP_ENV=prod
ENV APP_ENV=${APP_ENV}

ARG NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
ENV NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=${NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY}
ENV NEXT_TELEMETRY_DISABLED=1

COPY package*.json ./
RUN npm ci --include=dev

COPY . .
RUN npm run build

FROM node:22-alpine3.20 AS runtime
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

# Install prod deps only (tsx + dotenv must be in dependencies for initContainer migrations)
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev

# Runtime artefacts
COPY --from=build /app/.next ./.next
COPY --from=build /app/public ./public

# Migration runner inputs
COPY --from=build /app/db ./db
COPY --from=build /app/lib ./lib
COPY --from=build /app/scripts ./scripts

# Next config (present in your repo)
COPY --from=build /app/next.config.mjs ./next.config.mjs

CMD ["npm", "run", "start"]
```
Key points:
- `npm ci --include=dev` ensures Tailwind, PostCSS and ESLint are present for `next build`.
- The runtime stage starts clean; only built files and runtime dependencies are copied.
- Runtime config (database, secrets, etc.) is provided by Kubernetes via `env` and `envFrom`.
3. Image tag strategy
Images are tagged to make them:
- Unique per pipeline run.
- Sortable by Flux image policies.
3.1 Tag format
For the games/blaster project:
- On `develop` (k8s dev): `registry.reids.net.au/games/blaster:dev-YYYYMMDD.N`
- On `main` (k8s prod): `registry.reids.net.au/games/blaster:prod-YYYYMMDD.N`

Where:
- `YYYYMMDD` is the date of the pipeline run.
- `N` is the GitLab pipeline IID (`$CI_PIPELINE_IID`).
Flux `ImageRepository` and `ImagePolicy` resources in the infra repo select the latest `dev-` tag for k8s dev. Prod can be pinned manually, or follow a similar policy once you settle on one.
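The tag construction itself is plain shell and can be checked locally; `CI_PIPELINE_IID` is normally injected by GitLab, so a stand-in value is used here:

```shell
# Stand-in for the pipeline IID that GitLab injects in CI
CI_PIPELINE_IID=49

# Same construction the build jobs use
DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
TAG="dev-${DATE_TAG}"
echo "$TAG"   # e.g. dev-20251115.49

# Sanity check against the pattern the Flux ImagePolicy filters on
echo "$TAG" | grep -Eq '^dev-[0-9]{8}\.[0-9]+$' && echo "tag format OK"
```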
4. GitLab CI overview
File: games/blaster/.gitlab-ci.yml
High‑level stages:
- `notify` - optional Slack notification at pipeline start and end.
- `lint` - run ESLint (or other linters).
- `test` - run unit tests or basic checks.
- `build` - use Kaniko to build and push images for `develop` and `main`.
- `notify_end` - Slack notification on success or failure.
The CI file relies on GitLab‑provided variables:
- `$CI_REGISTRY`
- `$CI_REGISTRY_IMAGE`
- `$CI_REGISTRY_USER`
- `$CI_REGISTRY_PASSWORD`
And your own variables (for example Slack and Clerk).
5. Required CI/CD variables
In the games/blaster GitLab project, under Settings → CI/CD → Variables, define:
| Name | Example / Notes | Protected |
|---|---|---|
| `CLERK_KEY_DEV` | Your Clerk dev publishable key | No |
| `CLERK_KEY_PROD` | Your Clerk prod publishable key | No |
| `SLACK_NOTIFY` | `true` or `false` | No |
| `SLACK_BOT_TOKEN` | `xoxb-…` bot token | No |
| `SLACK_CHANNEL` | Channel ID such as `C0123ABCD` | No |
GitLab injects the registry values automatically when the project has the Container Registry enabled.
Initially I had only the Clerk dev publishable key as a build variable; when I moved into production with Clerk, I added a separate prod key and updated the GitLab CI YAML accordingly.
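The resulting branch-to-key mapping that the two build jobs implement can be sketched in shell (the key values here are placeholders):

```shell
# Placeholders for the CI/CD variables defined above
CLERK_KEY_DEV="pk_test_placeholder"
CLERK_KEY_PROD="pk_live_placeholder"
CI_COMMIT_BRANCH="develop"   # injected by GitLab in a real pipeline

# build:develop uses the dev key, build:main the prod key
case "$CI_COMMIT_BRANCH" in
  develop) KEY="$CLERK_KEY_DEV" ;;
  main)    KEY="$CLERK_KEY_PROD" ;;
esac
echo "NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=$KEY"
```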
6. GitLab CI file in detail
6.1 Stages and common variables
```yaml
stages:
  - notify
  - lint
  - test
  - build
  - notify_end

variables:
  # GitLab project image, e.g. registry.reids.net.au/games/blaster
  DOCKER_IMAGE: "$CI_REGISTRY_IMAGE"
```

`DOCKER_IMAGE` gives a stable base for tagging, such as `registry.reids.net.au/games/blaster`.
6.2 Slack notifications (optional)
The Slack jobs are optional and only run when `SLACK_NOTIFY == "true"`.
Start‑of‑pipeline notification:
```yaml
notify:start:
  stage: notify
  image: alpine:3.20
  rules:
    - if: '$SLACK_NOTIFY == "true"'
  before_script:
    - apk add --no-cache curl jq
  script:
    - |
      MSG="*Pipeline* ${CI_PIPELINE_URL} for *${CI_PROJECT_PATH}* started on ${CI_COMMIT_REF_NAME} by ${GITLAB_USER_LOGIN}."
      MORE="${CI_COMMIT_SHORT_SHA}: $(echo "$CI_COMMIT_TITLE" | head -c 120)"
      jq -n --arg ch "$SLACK_CHANNEL" --arg msg "$MSG" --arg more "$MORE" '{channel:$ch, text:$msg,
        blocks:[
          {"type":"section","text":{"type":"mrkdwn","text":$msg}},
          {"type":"context","elements":[{"type":"mrkdwn","text":$more}]}
        ]}' > payload.json
      curl -sS -H "Authorization: Bearer $SLACK_BOT_TOKEN" -H "Content-type: application/json" -X POST https://slack.com/api/chat.postMessage --data @payload.json | tee slack.resp.json
      jq -r '.ts // empty' slack.resp.json | tee slack.ts
  artifacts:
    paths:
      - slack.ts
      - slack.resp.json
    expire_in: 1 day
```
End-of-pipeline notifications (`notify:success` and `notify:failure`) reuse `slack.ts` to thread their messages.
You can remove these jobs entirely if Slack is not desired.
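As a sketch of the threading mechanism: Slack's `chat.postMessage` API threads a message when the payload carries the `thread_ts` of an earlier message. The timestamp value below is invented for illustration:

```shell
# Simulate the slack.ts artifact written by notify:start
echo "1731600000.123456" > slack.ts

# An end-of-pipeline job reads it back and threads its message under it
THREAD_TS="$(cat slack.ts)"
printf '{"channel":"%s","thread_ts":"%s","text":"%s"}\n' \
  "C0123ABCD" "$THREAD_TS" "Pipeline finished: success" > payload.json
cat payload.json

# The real job would POST this with:
#   curl -sS -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
#        -H "Content-type: application/json" \
#        -X POST https://slack.com/api/chat.postMessage --data @payload.json
```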
6.3 Lint and test jobs
```yaml
lint:
  stage: lint
  image: node:22-alpine3.20
  script:
    - npm ci
    - npm run lint

test:
  stage: test
  image: node:22-alpine3.20
  script:
    - npm ci
    - npm test  # ensure package.json has a "test" script
```

Notes:
- `npm ci` is re-run in each job for isolation.
- You can cache `node_modules` if needed, but keep it simple unless builds become slow.
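If caching does become worthwhile, GitLab's `cache:key:files` keys the cache on a hash of named files so it invalidates when the lockfile changes. The idea, sketched with a locally computed hash (the lockfile content here is a dummy):

```shell
# Dummy lockfile standing in for the repo's package-lock.json
printf '{"name":"blaster","lockfileVersion":3}\n' > package-lock.json

# A cache key that changes whenever the lockfile changes, similar in
# spirit to GitLab's cache:key:files mechanism
KEY="node-modules-$(sha256sum package-lock.json | cut -c1-12)"
echo "$KEY"
```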
6.4 Kaniko build for develop (dev images)
```yaml
build:develop:
  stage: build
  rules:
    # 1. Skip when commit message contains [skip ci]
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    # 2. Run on every commit to develop
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: on_success
    # 3. Fallback: never
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="dev-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${IMAGE}" --cache=false --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY}"
```
What this does:
- Only runs on the `develop` branch.
- Generates a tag such as `dev-20251115.49`.
- Writes a Docker config for registry auth using GitLab-provided credentials.
- Calls Kaniko with:
  - Build context at the repo root.
  - The root `Dockerfile`.
  - Destination set to `DOCKER_IMAGE:TAG`.
  - A build argument for `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`.
6.5 Kaniko build for main (prod images)
```yaml
build:main:
  stage: build
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: on_success
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="prod-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${IMAGE}" --cache=false --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY}"
```

This job mirrors `build:develop`, but creates `prod-YYYYMMDD.N` tags for the `main` branch.
7. Registry and authentication
7.1 GitLab registry
Requirements:
- The `games/blaster` project has the Container Registry enabled.
- The GitLab runner is configured with access to the registry.

GitLab injects:
- `$CI_REGISTRY` (for example `registry.reids.net.au`).
- `$CI_REGISTRY_IMAGE` (`registry.reids.net.au/games/blaster`).
- `$CI_REGISTRY_USER` and `$CI_REGISTRY_PASSWORD`.

Kaniko uses these to authenticate via `/kaniko/.docker/config.json`.
7.2 Kubernetes pull secrets
On the cluster, workloads pull from the registry using:
- A `docker-registry` Secret such as `blaster-dev-registry`.
- `imagePullSecrets` in the Deployment spec.
This is separate from GitLab CI; see the k8s manifests runbook for the exact secret creation steps.
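As a sketch of what such a secret carries (the username and token below are placeholders; in practice the secret is typically created with `kubectl create secret docker-registry`, covered in the manifests runbook), the `.dockerconfigjson` payload has the same shape as the config Kaniko writes in CI, plus a base64 `auth` field:

```shell
# Placeholder credentials - substitute a real registry token in practice
REGISTRY="registry.reids.net.au"
USERNAME="blaster-pull"
PASSWORD="example-token"

# Same JSON shape as /kaniko/.docker/config.json, plus the base64
# user:password "auth" field that docker-registry secrets carry
AUTH="$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)"
cat > dockerconfig.json <<EOF
{"auths":{"${REGISTRY}":{"username":"${USERNAME}","password":"${PASSWORD}","auth":"${AUTH}"}}}
EOF
cat dockerconfig.json
```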
8. Verifying builds
8.1 Pipeline output
On a successful dev build you should see a log similar to:
```
Building tag registry.reids.net.au/games/blaster:dev-20251115.49 with Kaniko
INFO[0036] CMD ["npm", "run", "start"]
INFO[0036] Pushing image to registry.reids.net.au/games/blaster:dev-20251115.49
INFO[0071] Pushed registry.reids.net.au/games/blaster@sha256:...
Job succeeded
```

For prod builds, the tag will start with `prod-`.
8.2 Container registry
In GitLab:
- Open the `games/blaster` project.
- Go to Packages and Registries → Container Registry.
- Confirm tags such as:
  - `dev-20251115.49`
  - `prod-20251115.11`
If tags are not present:
- Check that the `build:*` jobs are running.
- Verify the registry is enabled and accessible.
9. How Flux uses the tags
Flux image automation in the infra repo:
- `ImageRepository` points at `registry.reids.net.au/games/blaster`.
- `ImagePolicy` filters tags matching `^dev-(?P<ts>[0-9]{8}\.[0-9]+)$` and picks the latest.
- `ImageUpdateAutomation` updates the image field in `k8s/dev/50-app-deployment.yaml`.
The deployment YAML includes an image annotation:
```yaml
containers:
  - name: blaster
    image: registry.reids.net.au/games/blaster:dev-20251115.42 # {"$imagepolicy": "flux-system:blaster-dev-policy"}
```
Flux updates the image tag in Git, creating commits such as:
```
registry.reids.net.au/games/blaster:dev-20251115.49 -> registry.reids.net.au/games/blaster:dev-20251115.51 [skip ci]
```
This is what keeps k8s dev in sync with the latest dev image.
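The policy's "latest" selection is based on the extracted timestamp group, not plain string order. A rough shell approximation that sorts the date and pipeline IID as two separate numbers (sample tags invented for illustration; Flux's own ordering semantics depend on the policy configuration):

```shell
# Sample tags as they might appear in the registry (illustrative values)
TAGS="dev-20251115.9
dev-20251115.51
dev-20251114.102
dev-20251115.49"

# Filter to the pattern the ImagePolicy matches, then sort numerically
# on the date and the pipeline IID (the two '.'-separated fields)
LATEST="$(echo "$TAGS" \
  | grep -E '^dev-[0-9]{8}\.[0-9]+$' \
  | sed 's/^dev-//' \
  | sort -t. -k1,1n -k2,2n \
  | tail -n1)"
echo "latest: dev-${LATEST}"
# → latest: dev-20251115.51
```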
Complete .gitlab-ci.yml
Updated 07/12/2025 due to CVE-2025-55182 (React Server Components RCE): Blaster impact assessment and remediation runbook.
```yaml
stages:
  - lint
  - test
  - build

variables:
  # GitLab project image, e.g. registry.reids.net.au/games/blaster
  DOCKER_IMAGE: "$CI_REGISTRY_IMAGE"

# --------------------------------------------------------------------
# Lint and test (no container build)
# --------------------------------------------------------------------
lint:
  stage: lint
  image: node:22-alpine3.20
  script:
    - npm run ci:install
    - npm run lint

test:
  stage: test
  image: node:22-alpine3.20
  script:
    - npm run ci:install
    - npm test  # your package.json now has a "test" script

# --------------------------------------------------------------------
# Kaniko builds - no docker daemon, good for Kubernetes runners
# Tag format for Flux:
#   dev:  registry.reids.net.au/games/blaster:dev-YYYYMMDD.IID
#   prod: registry.reids.net.au/games/blaster:prod-YYYYMMDD.IID
# --------------------------------------------------------------------

# Build and push for develop (k8s dev)
build:develop:
  stage: build
  rules:
    # 1. If commit message contains [skip ci], do not run this job
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    # 2. Run on every commit to develop
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: on_success
    # 3. Fallback: never
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="dev-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      # Use dev Clerk key for develop branch
      echo "Building with dev Clerk key for develop branch"
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${IMAGE}" \
        --cache=false \
        --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${CLERK_KEY_DEV}"

# Build and push for main (k8s prod)
build:main:
  stage: build
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: on_success
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="prod-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      # Use prod Clerk key for main branch
      echo "Building with prod Clerk key for main branch"
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${IMAGE}" \
        --cache=false \
        --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${CLERK_KEY_PROD}"
```
10. Troubleshooting
Common issues:
- Kaniko build fails on `npm ci`:
  - Check `package-lock.json` is up to date.
  - Ensure the Node version in the Dockerfile matches the one used locally.
- Kaniko build fails on `npm run build`:
  - Run `npm run build` locally first and fix ESLint or type errors.
- Image pushes fail:
  - Confirm the Container Registry is enabled.
  - Confirm the runner has access and that the `$CI_REGISTRY_*` variables are set.
- Tags missing in registry:
  - Ensure the `build:*` jobs are not skipped (no `[skip ci]` in the commit message).
  - Check the CI `rules` logic still matches your branch names.
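To sanity-check the `[skip ci]` rule against a commit message, the same regex can be exercised in plain shell (`grep -i` stands in for GitLab's case-insensitive `=~` matching here):

```shell
# A commit message that should suppress the build jobs
MSG="chore: bump image tag [skip ci]"

if echo "$MSG" | grep -qiE '\[skip ci\]'; then
  echo "build jobs skipped"
else
  echo "build jobs run"
fi
# → build jobs skipped
```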
11. Verification checklist
- Dockerfile builds successfully with `docker build .` or local Kaniko.
- `.gitlab-ci.yml` defines `lint`, `test` and `build` stages.
- `build:develop` runs on the `develop` branch and creates `dev-YYYYMMDD.N` tags.
- `build:main` runs on the `main` branch and creates `prod-YYYYMMDD.N` tags.
- Tags appear in the GitLab Container Registry for `games/blaster`.
- Flux image automation in the infra repo can see and select the latest `dev-` tag.
Once this checklist passes, the Blaster Dockerfile and GitLab CI pipeline are ready to support GitOps deployments via Flux.