Blaster Dockerfile and GitLab CI

info

This runbook covers how images are built for the Blaster game: the Dockerfile layout, the GitLab CI pipeline and the tag format used by Flux image automation.

Blaster GitOps series

  1. Blaster GitOps summary
  2. Blaster repo and branches
  3. Dockerfile & GitLab CI - you are here
  4. Clerk authentication & user setup
  5. Google OAuth for Clerk
  6. Blaster prep for automation
  7. Dev app k8s manifests
  8. Dev flux sources & Kustomizations
  9. Dev image automation
  10. Dev SOPS & age
  11. Dev verification & troubleshooting
  12. Dev full runbook
  13. Prod overview
  14. Prod app k8s manifests and deployment
  15. Prod Flux GitOps and image automation
  16. Prod Cloudflare, Origin CA and tunnel routing
  17. Prod full runbook
  18. Post development branches

1. Context

Blaster is a Next.js + Phaser game built with Node 22 and deployed to an on‑prem Kubernetes cluster.

The build chain is:

  • Dockerfile (single file for dev and prod images).
  • GitLab CI running on a Kubernetes runner.
  • Kaniko for building and pushing images to:
    • registry.reids.net.au/games/blaster.
  • FluxCD image automation reading tags to keep k8s dev in sync.

This document assumes:

  • App repo: games/blaster.
  • Infra repo: fluxgitops/flux-config.
  • Registry: registry.reids.net.au.

For repo and branch details see Blaster repo and branches.


2. Dockerfile design

2.1 Goals

  • Single Dockerfile for both dev and prod images.
  • Build with Node 22 on Alpine for small images.
  • Multi‑stage build:
    • Stage 1: install dev dependencies and run next build.
    • Stage 2: slim runtime with only built artefacts.
  • Accept build‑time arguments for public config (for example Clerk public key).

2.2 Dockerfile

note

Updated 07/12/2025 due to CVE-2025-55182 (React Server Components RCE); see the Blaster impact assessment and remediation runbook.

File: games/blaster/Dockerfile

# syntax=docker/dockerfile:1.7

FROM node:22-alpine3.20 AS build
WORKDIR /app

ARG APP_ENV=prod
ENV APP_ENV=${APP_ENV}

ARG NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
ENV NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=${NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY}

ENV NEXT_TELEMETRY_DISABLED=1

COPY package*.json ./
RUN npm ci --include=dev

COPY . .
RUN npm run build

FROM node:22-alpine3.20 AS runtime
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

# Install prod deps only (tsx + dotenv must be in dependencies for initContainer migrations)
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev

# Runtime artefacts
COPY --from=build /app/.next ./.next
COPY --from=build /app/public ./public

# Migration runner inputs
COPY --from=build /app/db ./db
COPY --from=build /app/lib ./lib
COPY --from=build /app/scripts ./scripts

# Next config (present in your repo)
COPY --from=build /app/next.config.mjs ./next.config.mjs

CMD ["npm", "run", "start"]

Key points:

  • npm ci --include=dev ensures Tailwind, PostCSS and ESLint are present for next build.
  • The runtime stage starts clean; only built files and runtime deps are copied.
  • Runtime config (database, secrets, etc.) is provided by Kubernetes via env and envFrom.

3. Image tag strategy

Images are tagged to make them:

  • Unique per pipeline run.
  • Sortable by Flux image policies.

3.1 Tag format

For the games/blaster project:

  • On develop (k8s dev):

    • registry.reids.net.au/games/blaster:dev-YYYYMMDD.N
  • On main (k8s prod):

    • registry.reids.net.au/games/blaster:prod-YYYYMMDD.N

Where:

  • YYYYMMDD is the date of the pipeline.
  • N is the GitLab pipeline IID ($CI_PIPELINE_IID).
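The resulting tag can be reproduced locally as a quick sanity check. This is a sketch: 49 stands in for $CI_PIPELINE_IID, which only exists inside a pipeline.

```shell
# Reproduce the tag format the build jobs generate.
# CI_PIPELINE_IID is injected by GitLab; 49 is a stand-in here.
CI_PIPELINE_IID=49
DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
TAG="dev-${DATE_TAG}"
echo "registry.reids.net.au/games/blaster:${TAG}"
```

On 15 November 2025 with pipeline IID 49 this prints registry.reids.net.au/games/blaster:dev-20251115.49.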

Flux ImageRepository and ImagePolicy resources in the infra repo select the latest dev- tag for k8s dev. Prod can be pinned manually or follow a similar policy once you settle on one.


4. GitLab CI overview

File: games/blaster/.gitlab-ci.yml

High‑level stages:

  1. notify
    • Optional Slack notification at pipeline start and end.
  2. lint
    • Run ESLint (or other linters).
  3. test
    • Run unit tests or basic checks.
  4. build
    • Use Kaniko to build and push images for develop and main.
  5. notify_end
    • Slack notification on success or failure.

The CI file relies on GitLab‑provided variables:

  • $CI_REGISTRY
  • $CI_REGISTRY_IMAGE
  • $CI_REGISTRY_USER
  • $CI_REGISTRY_PASSWORD

And your own variables (for example Slack and Clerk).


5. Required CI/CD variables

In the games/blaster GitLab project, under Settings → CI/CD → Variables, define:

Name | Example / Notes | Protected
---- | --------------- | ---------
CLERK_KEY_DEV | Your Clerk dev publishable key | No
CLERK_KEY_PROD | Your Clerk prod publishable key | No
SLACK_NOTIFY | true or false | No
SLACK_BOT_TOKEN | xoxb-… bot token | No
SLACK_CHANNEL | Channel ID such as C0123ABCD | No

GitLab injects the registry values automatically when the project has the Container Registry enabled.

note

Initially I had just the Clerk dev publishable key as a build variable; when I moved into production with Clerk, I added a separate prod key and updated the GitLab CI YAML file.


6. GitLab CI file in detail

6.1 Stages and common variables

stages:
  - notify
  - lint
  - test
  - build
  - notify_end

variables:
  # GitLab project image, e.g. registry.reids.net.au/games/blaster
  DOCKER_IMAGE: "$CI_REGISTRY_IMAGE"

DOCKER_IMAGE gives a stable base for tagging, such as registry.reids.net.au/games/blaster.

6.2 Slack notifications (optional)

The Slack jobs are optional and only run when SLACK_NOTIFY == "true".

Start‑of‑pipeline notification:

notify:start:
  stage: notify
  image: alpine:3.20
  rules:
    - if: '$SLACK_NOTIFY == "true"'
  before_script:
    - apk add --no-cache curl jq
  script:
    - |
      MSG="*Pipeline* ${CI_PIPELINE_URL} for *${CI_PROJECT_PATH}* started on ${CI_COMMIT_REF_NAME} by ${GITLAB_USER_LOGIN}."
      MORE="${CI_COMMIT_SHORT_SHA}: $(echo "$CI_COMMIT_TITLE" | head -c 120)"
      jq -n --arg ch "$SLACK_CHANNEL" --arg msg "$MSG" --arg more "$MORE" '{channel:$ch, text:$msg,
        blocks:[
          {"type":"section","text":{"type":"mrkdwn","text":$msg}},
          {"type":"context","elements":[{"type":"mrkdwn","text":$more}]}
        ]}' > payload.json
      curl -sS -H "Authorization: Bearer $SLACK_BOT_TOKEN" -H "Content-type: application/json" -X POST https://slack.com/api/chat.postMessage --data @payload.json | tee slack.resp.json
      jq -r '.ts // empty' slack.resp.json | tee slack.ts
  artifacts:
    paths:
      - slack.ts
      - slack.resp.json
    expire_in: 1 day

End‑of‑pipeline notifications (notify:success and notify:failure) reuse slack.ts to thread messages.

You can remove these jobs entirely if Slack is not desired.
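As an illustration of the threading step, a success job might look like the sketch below. This is an outline under assumptions, not the exact job from the repo: it mirrors notify:start's tooling and passes the saved ts value as Slack's thread_ts so the reply lands in the same thread.

```yaml
notify:success:
  stage: notify_end
  image: alpine:3.20
  rules:
    - if: '$SLACK_NOTIFY == "true"'
  needs: ["notify:start"]   # pulls in the slack.ts artifact
  before_script:
    - apk add --no-cache curl jq
  script:
    - |
      TS="$(cat slack.ts 2>/dev/null || true)"
      MSG="Pipeline ${CI_PIPELINE_URL} for ${CI_PROJECT_PATH} succeeded."
      # Only add thread_ts when the start job actually captured a timestamp.
      jq -n --arg ch "$SLACK_CHANNEL" --arg msg "$MSG" --arg ts "$TS" \
        '{channel:$ch, text:$msg} + (if $ts != "" then {thread_ts:$ts} else {} end)' > payload.json
      curl -sS -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
        -H "Content-type: application/json" \
        -X POST https://slack.com/api/chat.postMessage --data @payload.json
```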

6.3 Lint and test jobs

lint:
  stage: lint
  image: node:22-alpine3.20
  script:
    - npm ci
    - npm run lint

test:
  stage: test
  image: node:22-alpine3.20
  script:
    - npm ci
    - npm test   # ensure package.json has a "test" script

Notes:

  • npm ci is re‑run in each job for isolation.
  • You can cache node_modules if needed, but keep it simple unless builds become slow.
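If installs do become a bottleneck, GitLab's lockfile-keyed cache is a common pattern. The sketch below is not currently in the repo's CI file; it points npm at a cache directory inside the project so GitLab can persist it between jobs.

```yaml
lint:
  stage: lint
  image: node:22-alpine3.20
  cache:
    key:
      files:
        - package-lock.json   # cache invalidates when the lockfile changes
    paths:
      - .npm/
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run lint
```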

6.4 Kaniko build for develop (dev images)

build:develop:
  stage: build
  rules:
    # 1. Skip when commit message contains [skip ci]
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    # 2. Run on every commit to develop
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: on_success
    # 3. Fallback: never
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="dev-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${IMAGE}" \
        --cache=false \
        --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${CLERK_KEY_DEV}"

What this does:

  • Only runs on the develop branch.
  • Generates a tag such as dev-20251115.49.
  • Writes a Docker config for registry auth using GitLab‑provided credentials.
  • Calls Kaniko with:
    • Build context at the repo root.
    • The root Dockerfile.
    • Destination set to DOCKER_IMAGE:TAG.
    • Build argument NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY, populated from the CLERK_KEY_DEV CI variable.
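The heredoc step can be exercised outside CI to confirm the auth file ends up where Kaniko expects. This is a local sketch with dummy credentials and a /tmp path standing in for /kaniko.

```shell
# Recreate the registry auth file the job writes, using dummy values.
CI_REGISTRY="registry.reids.net.au"
CI_REGISTRY_USER="ci-user"
CI_REGISTRY_PASSWORD="dummy-password"
mkdir -p /tmp/kaniko/.docker
cat <<EOF >/tmp/kaniko/.docker/config.json
{
  "auths": {
    "${CI_REGISTRY}": {
      "username": "${CI_REGISTRY_USER}",
      "password": "${CI_REGISTRY_PASSWORD}"
    }
  }
}
EOF
# The registry hostname should appear as the auths key.
grep -q "\"${CI_REGISTRY}\"" /tmp/kaniko/.docker/config.json && echo "auth entry written"
```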

6.5 Kaniko build for main (prod images)

build:main:
  stage: build
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: on_success
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="prod-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${IMAGE}" \
        --cache=false \
        --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${CLERK_KEY_PROD}"

This is symmetrical to build:develop, but creates prod-YYYYMMDD.N tags for the main branch.


7. Registry and authentication

7.1 GitLab registry

Requirements:

  • The games/blaster project has the Container Registry enabled.
  • The GitLab runner is configured with proper access to the registry.

GitLab injects:

  • $CI_REGISTRY (for example registry.reids.net.au).
  • $CI_REGISTRY_IMAGE (registry.reids.net.au/games/blaster).
  • $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD.

Kaniko uses these to authenticate via /kaniko/.docker/config.json.

7.2 Kubernetes pull secrets

On the cluster, workloads pull from the registry using:

  • A docker-registry Secret such as blaster-dev-registry.
  • imagePullSecrets in the Deployment spec.

This is separate from GitLab CI; see the k8s manifests runbook for the exact secret creation steps.
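For reference, the secret shape that imagePullSecrets points at looks roughly like this. The namespace and payload are placeholders (the exact creation steps live in the k8s manifests runbook); in practice it is usually generated with kubectl create secret docker-registry rather than written by hand.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: blaster-dev-registry
  namespace: blaster-dev        # assumption - match your dev namespace
type: kubernetes.io/dockerconfigjson
data:
  # base64 of a .docker/config.json with an auth entry for
  # registry.reids.net.au
  .dockerconfigjson: <base64-encoded docker config>
```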


8. Verifying builds

8.1 Pipeline output

On a successful dev build you should see a log similar to:

Building tag registry.reids.net.au/games/blaster:dev-20251115.49 with Kaniko
INFO[0036] CMD ["npm", "run", "start"]
INFO[0036] Pushing image to registry.reids.net.au/games/blaster:dev-20251115.49
INFO[0071] Pushed registry.reids.net.au/games/blaster@sha256:...
Job succeeded

For prod builds, the tag will start with prod-.

8.2 Container registry

In GitLab:

  1. Open the games/blaster project.
  2. Go to Packages and Registries → Container Registry.
  3. Confirm tags such as:
    • dev-20251115.49
    • prod-20251115.11

If tags are not present:

  • Check that the build:* jobs are running.
  • Verify the registry is enabled and accessible.

9. How Flux uses the tags

Flux image automation in the infra repo:

  • ImageRepository points at registry.reids.net.au/games/blaster.
  • ImagePolicy filters tags matching ^dev-(?P<ts>[0-9]{8}\.[0-9]+)$ and picks the latest.
  • ImageUpdateAutomation updates the image field in k8s/dev/50-app-deployment.yaml.

The deployment YAML includes an image annotation:

containers:
  - name: blaster
    image: registry.reids.net.au/games/blaster:dev-20251115.42 # {"$imagepolicy": "flux-system:blaster-dev-policy"}

Flux updates the image tag in Git, creating commits such as:

registry.reids.net.au/games/blaster:dev-20251115.49 -> registry.reids.net.au/games/blaster:dev-20251115.51 [skip ci]

This is what keeps k8s dev in sync with the latest dev image.
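Put together, the ImageRepository and ImagePolicy described above look roughly like the sketch below. Treat it as an outline under assumptions: the names mirror the flux-system:blaster-dev-policy annotation, but intervals and exact fields should be checked against the infra repo.

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: blaster-dev
  namespace: flux-system
spec:
  image: registry.reids.net.au/games/blaster
  interval: 5m        # assumption - how often tags are scanned
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: blaster-dev-policy
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: blaster-dev
  filterTags:
    pattern: '^dev-(?P<ts>[0-9]{8}\.[0-9]+)$'
    extract: '$ts'
  policy:
    numerical:
      order: asc      # highest extracted value is selected
```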


Complete .gitlab-ci.yml

note

Updated 07/12/2025 due to CVE-2025-55182 (React Server Components RCE); see the Blaster impact assessment and remediation runbook.

stages:
  - lint
  - test
  - build

variables:
  # GitLab project image, e.g. registry.reids.net.au/games/blaster
  DOCKER_IMAGE: "$CI_REGISTRY_IMAGE"

# --------------------------------------------------------------------
# Lint and test (no container build)
# --------------------------------------------------------------------

lint:
  stage: lint
  image: node:22-alpine3.20
  script:
    - npm run ci:install
    - npm run lint

test:
  stage: test
  image: node:22-alpine3.20
  script:
    - npm run ci:install
    - npm test   # your package.json now has a "test" script

# --------------------------------------------------------------------
# Kaniko builds - no docker daemon, good for Kubernetes runners
# Tag format for Flux:
#   dev:  registry.reids.net.au/games/blaster:dev-YYYYMMDD.IID
#   prod: registry.reids.net.au/games/blaster:prod-YYYYMMDD.IID
# --------------------------------------------------------------------

# Build and push for develop (k8s dev)
build:develop:
  stage: build
  rules:
    # 1. If commit message contains [skip ci], do not run this job
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    # 2. Run on every commit to develop
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: on_success
    # 3. Fallback: never
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="dev-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      # Use dev Clerk key for develop branch
      echo "Building with dev Clerk key for develop branch"
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${IMAGE}" \
        --cache=false \
        --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${CLERK_KEY_DEV}"

# Build and push for main (k8s prod)
build:main:
  stage: build
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/i'
      when: never
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: on_success
    - when: never
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - export DATE_TAG="$(date +%Y%m%d).${CI_PIPELINE_IID}"
    - export TAG="prod-${DATE_TAG}"
    - export IMAGE="${DOCKER_IMAGE}:${TAG}"
    - echo "Building tag ${IMAGE} with Kaniko"
    - mkdir -p /kaniko/.docker
    - |
      cat <<EOF >/kaniko/.docker/config.json
      {
        "auths": {
          "${CI_REGISTRY}": {
            "username": "${CI_REGISTRY_USER}",
            "password": "${CI_REGISTRY_PASSWORD}"
          }
        }
      }
      EOF
    - |
      # Use prod Clerk key for main branch
      echo "Building with prod Clerk key for main branch"
      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        --destination "${IMAGE}" \
        --cache=false \
        --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="${CLERK_KEY_PROD}"

10. Troubleshooting

Common issues:

  • Kaniko build fails on npm ci
    • Check package-lock.json is up to date.
    • Ensure Node version in Dockerfile matches the one used locally.
  • Kaniko build fails on npm run build
    • Run npm run build locally first and fix ESLint or type errors.
  • Image pushes fail
    • Confirm Container Registry is enabled.
    • Confirm the runner has access and that $CI_REGISTRY_* variables are set.
  • Tags missing in registry
    • Ensure the build:* jobs are not skipped (no [skip ci] in the commit message).
    • Check CI rules logic still matches your branch names.

11. Verification checklist

  • Dockerfile builds successfully with docker build . or local Kaniko.
  • .gitlab-ci.yml defines lint, test and build stages.
  • build:develop runs on the develop branch and creates dev-YYYYMMDD.N tags.
  • build:main runs on the main branch and creates prod-YYYYMMDD.N tags.
  • Tags appear in the GitLab Container Registry for games/blaster.
  • Flux image automation in the infra repo can see and select the latest dev- tag.

Once this checklist passes, the Blaster Dockerfile and GitLab CI pipeline are ready to support GitOps deployments via Flux.