React Server Components RCE

note

CVE-2025-55182

Blaster impact assessment, remediation, and verification runbook

This is my single, ordered record of what I checked, what I changed, why I changed it, what I intentionally did not change, and how I verified the final state for Blaster in both dev and prod Kubernetes namespaces.

It assumes:

  • Repo: games/blaster
  • Runtime: Kubernetes deployments in namespaces blaster-dev (dev) and blaster (prod)
  • Image build: GitLab CI + Kaniko
  • Delivery: Flux image automation updates the image tags in Kustomize manifests
  • Migrations: executed via initContainer npm run migrate and a custom migration runner reading /app/db/migrations/*.sql

1. Trigger: vendor outreach and risk framing

I received a message warning about CVE-2025-55182, described as a critical remote code execution (RCE) issue affecting React Server Components (RSC), with the following claims:

  • Public exploits available and increased threat activity.
  • Remediation guidance: upgrade Next.js to one of the patched versions listed in the message.

My first question was: Is my Blaster deployment affected?

tip

I do not assume. For security notices, I verify the version inside the running pods, not only on my workstation.

2. Phase 1: verify facts locally and in-cluster

2.1 Workstation: installed dependency versions

Run from the repo root:

node -p "require('next/package.json').version"
node -p "require('react/package.json').version"
node -p "require('react-server-dom-webpack/package.json').version" 2>/dev/null || true

I observed these versions:

next: 14.2.33
react: 18.3.1

note

If react-server-dom-webpack prints nothing, it is not installed as a direct dependency (it may still exist transitively, depending on the build).
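A quick way to check for a transitive copy is npm ls, which walks the full installed tree:

# Shows every installed copy of react-server-dom-webpack, direct or transitive; empty output means it is absent
npm ls react-server-dom-webpack || true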

2.2 Kubernetes: verify the version inside the running pods

Dev and prod:

kubectl -n blaster-dev exec -it deploy/blaster-app -- node -p "require('next/package.json').version"
kubectl -n blaster exec -it deploy/blaster-app -- node -p "require('next/package.json').version"

I observed:

14.2.33
14.2.33

2.3 Initial conclusion (risk posture)

Based on what I was actually running at the time of verification:

  • Blaster was running Next.js 14.2.33 and React 18.3.1.
  • The vendor notice listed patched versions only in the Next.js 15.x and 16.x lines, which Blaster does not run.

Therefore:

  • I did not treat this as an emergency major version upgrade.
  • I moved to dependency hygiene and runtime hardening, so my deployed artefacts match the security posture I am claiming.

3. Phase 2: fix dependency drift and pin versions

3.1 Why pinning mattered here

I noticed my package.json originally had:

  • next: "^14.2.18", which allows installs to drift to newer 14.x releases (the check after this list shows what that range admits).
  • eslint-config-next: pinned to "14.2.18", which can drift out of step with next itself.

This creates two problems:

  • Harder to reason about exposure when an advisory lands.
  • Tooling versions can diverge across machines and CI.
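To make the drift risk concrete, npm can list every published version the old caret range would accept:

# Every published version that the "^14.2.18" range allows npm install to resolve to
npm view 'next@^14.2.18' version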

3.2 Actions taken: pin exact versions

I pinned Next, eslint-config-next, and React packages.

Commands:

npm install --save-exact next@14.2.33
npm install --save-dev --save-exact eslint-config-next@14.2.33
npm install --save-exact react@18.3.1 react-dom@18.3.1
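A quick confirmation that the manifest now records exact versions rather than ranges:

# Expect: 14.2.33 18.3.1 18.3.1 14.2.33
node -e "const p = require('./package.json'); console.log(p.dependencies.next, p.dependencies.react, p.dependencies['react-dom'], p.devDependencies['eslint-config-next'])"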

4. Phase 3: interpret npm audit correctly

devDependencies versus runtime

4.1 What npm audit showed

I saw a high severity advisory (example: glob command injection) in the eslint plugin chain.

Key point:

  • This was in dev tooling.
  • Production dependencies were clean.

I proved it with:

npm audit --omit=dev

I observed:

found 0 vulnerabilities
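To see which top-level packages pull in a flagged transitive dependency (using glob from the advisory above as the example), npm ls prints the dependency paths and the resolved version on each path:

# Output is the dependency tree filtered to paths that lead to glob
npm ls glob || true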

4.2 What I intentionally did not do (and why)

note

I did NOT run npm audit fix --force

Reason:

  • --force can introduce breaking changes.
  • In my case it would have upgraded eslint tooling across majors (the audit output suggested installing eslint-config-next@16.0.7), which is an unnecessary risk when the issue is dev-only.
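If I want to see exactly what --force would change before rejecting it, npm supports a dry run that reports the changes without touching node_modules or the lockfile:

npm audit fix --force --dry-run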
note

I did NOT upgrade Next.js to 15 or 16 just because the email mentioned it.

Reason:

  • I verified my runtime version first.
  • A major upgrade should be done deliberately, with regression testing and a planned rollout.

5. Phase 4: CI failure after pinning (lockfile drift)

5.1 The failure

My GitLab job failed at npm ci with:

  • npm ci can only install packages when your package.json and package-lock.json ... are in sync

Examples of mismatches observed:

  • lockfile had dotenv@16.6.1 but package.json required dotenv@16.4.7
  • lockfile had tsx@4.20.6 but package.json required tsx@4.19.2
  • esbuild mismatch cascaded from the tsx dependency chain

This is correct behaviour. npm ci is a strict lockfile install.
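To spot this kind of drift before CI does, the declared ranges can be compared against the locked versions directly. A minimal sketch, assuming a lockfileVersion 2 or 3 package-lock.json:

# Print each declared dependency range next to the version recorded in the lockfile
node -e '
  const pkg  = require("./package.json");
  const lock = require("./package-lock.json");
  const declared = { ...pkg.dependencies, ...pkg.devDependencies };
  for (const [name, range] of Object.entries(declared)) {
    const locked = ((lock.packages || {})["node_modules/" + name] || {}).version;
    console.log(`${name}: declared ${range}, locked ${locked}`);
  }'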

5.2 The fix

After changing dependency versions, I regenerated and committed package-lock.json.

Commands:

rm -rf node_modules
npm install

# sanity checks
npm ci
npm test
npm run build

git add package.json package-lock.json
git commit -m "Pin deps; update lockfile"
git push

6. Phase 5: runtime hardening to omit devDependencies

6.1 The real problem

My original Dockerfile:

  • installs devDependencies in build (fine)
  • then copies the entire build stage /app into runtime (not fine)
  • which includes the build stage node_modules, unintentionally shipping devDependencies into production containers

That contradicts the “prod is clean” conclusion, because the runtime container would still carry dev tooling even if npm audit --omit=dev on my workstation is clean.

6.2 Constraint: I run migrations in the initContainer using tsx

My initContainer runs:

command: ["npm", "run", "migrate"]

My script is:

"migrate": "tsx scripts/migrate.ts"

Therefore the runtime image must include:

  • tsx at runtime
  • dotenv at runtime (my migration script loads .env.local by default, though Kubernetes supplies env vars)
  • TS sources: scripts/ and lib/
  • migration SQL files: db/migrations/*.sql
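To double-check that nothing else needs to move into dependencies, a rough pass over the migration runner's imports helps; the paths follow the repo layout above:

# List the import lines in the migration entrypoint and the shared code it pulls in
grep -Rh "^import" scripts/ lib/ | sort -u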

6.3 Updated package.json

Change required:

  • Move dotenv and tsx from devDependencies into dependencies (pin exact versions).
{
  "name": "blaster",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "NEXT_TELEMETRY_DISABLED=1 next build",
    "start": "next start",
    "lint": "next lint",
    "test": "npm run lint",
    "migrate": "tsx scripts/migrate.ts",
    "migrate:status": "tsx scripts/migrate.ts status",
    "ci:install": "npm ci --no-audit --no-fund"
  },
  "dependencies": {
    "@clerk/nextjs": "^6.35.2",
    "bad-words": "^4.0.0",
    "dotenv": "16.4.7",
    "next": "14.2.33",
    "pg": "^8.13.1",
    "phaser": "^3.86.0",
    "react": "18.3.1",
    "react-dom": "18.3.1",
    "swr": "^2.2.5",
    "tsx": "4.19.2"
  },
  "devDependencies": {
    "@types/node": "^20",
    "@types/pg": "^8.11.10",
    "@types/react": "^18",
    "@types/react-dom": "^18",
    "eslint": "^8.57.0",
    "eslint-config-next": "14.2.33",
    "postcss": "^8",
    "tailwindcss": "^3.4.1",
    "typescript": "^5"
  }
}
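After npm install, a quick check that the migration dependencies now resolve as production dependencies (this mirrors the in-cluster checks in section 8):

# Should list tsx@4.19.2 and dotenv@16.4.7 even with devDependencies omitted
npm ls tsx dotenv --omit=dev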

6.4 Updated Dockerfile

Key changes:

  • Runtime stage runs npm ci --omit=dev
  • Runtime stage copies only what it needs
  • next.config.mjs is copied explicitly, because the runtime stage no longer copies the whole build-stage /app and next start still reads the config at startup
# syntax=docker/dockerfile:1.7

# Build stage: install all dependencies (dev included) and compile the Next.js app
FROM node:22-alpine3.20 AS build
WORKDIR /app

ARG APP_ENV=prod
ENV APP_ENV=${APP_ENV}

ARG NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
ENV NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=${NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY}

ENV NEXT_TELEMETRY_DISABLED=1

COPY package*.json ./
RUN npm ci --include=dev

COPY . .
RUN npm run build

# Runtime stage: production dependencies only, plus exactly what the app and migrations need
FROM node:22-alpine3.20 AS runtime
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

# Clean install without devDependencies, so no dev tooling ships to production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev

# Built app and static assets
COPY --from=build /app/.next ./.next
COPY --from=build /app/public ./public

# Migration runner: SQL files, shared code, and the tsx entrypoint used by the initContainer
COPY --from=build /app/db ./db
COPY --from=build /app/lib ./lib
COPY --from=build /app/scripts ./scripts

# next start reads the config at startup
COPY --from=build /app/next.config.mjs ./next.config.mjs

CMD ["npm", "run", "start"]
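
Before handing the build to CI/Kaniko, I can smoke-test the runtime image locally. This is a sketch: the blaster:local tag is just a local placeholder, and the Clerk publishable key is assumed to be exported in my shell.

# Build the runtime image locally (build arg assumed to be in the local environment)
docker build -t blaster:local \
  --build-arg NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="$NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY" .

# Migration assets and pinned runtime deps should be present; eslint should not
docker run --rm blaster:local ls db/migrations
docker run --rm blaster:local node -p 'require("tsx/package.json").version'
docker run --rm blaster:local npm ls eslint --omit=dev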

6.5 CI output consistency

I already defined:

"ci:install": "npm ci --no-audit --no-fund"

So I can update GitLab CI lint and test jobs to use it for consistent flags:

lint:
  script:
    - npm run ci:install
    - npm run lint

test:
  script:
    - npm run ci:install
    - npm run test

This is optional. It does not change semantics; it just makes CI output quieter and more consistent.

7. Phase 6: confirm other repo files that affect the runtime image

7.1 .dockerignore

My .dockerignore excludes node_modules, k8s/, and other non-runtime files, but does not exclude:

  • db/
  • lib/
  • scripts/
  • public/

Therefore the new Dockerfile will still ship migrations and migration runner code.
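A quick guard against this regressing (a rough check that only catches simple top-level patterns):

# Warn if any runtime-required path is listed in .dockerignore
grep -nE '^(db|lib|scripts|public)/?' .dockerignore || echo "OK: db/, lib/, scripts/ and public/ are not excluded"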

7.2 next.config.mjs

My next.config.mjs modifies the webpack config, and next start reads it at startup, so it should be copied into the runtime image to avoid configuration drift. I copy it explicitly in the Dockerfile runtime stage; the check below confirms it lands in the container.
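# Confirm the config is present in the running dev pod
kubectl -n blaster-dev exec -it deploy/blaster-app -- sh -lc 'ls -la /app/next.config.mjs'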

8. Verification: commands and expected results

I run the checks in dev first, then prod.

8.1 Verify in dev (blaster-dev)

Confirm Next version

kubectl -n blaster-dev exec -it deploy/blaster-app -- sh -lc 'node -p "require(\"next/package.json\").version"'

Expected:

  • 14.2.33

Confirm migrations exist in the container

kubectl -n blaster-dev exec -it deploy/blaster-app -- sh -lc 'ls -la /app/db/migrations | head'

Expected:

  • lists 001_init.sql etc.

Confirm migration runner works and DB is up to date

kubectl -n blaster-dev exec -it deploy/blaster-app -- sh -lc 'npm run migrate:status'

Expected:

  • a list of migrations 001..008 as ✓

Confirm runtime has the migration deps (by design)

kubectl -n blaster-dev exec -it deploy/blaster-app -- sh -lc 'node -p "require(\"tsx/package.json\").version"; node -p "require(\"dotenv/package.json\").version"'

Expected:

  • 4.19.2
  • 16.4.7

Confirm runtime does not include dev tooling

kubectl -n blaster-dev exec -it deploy/blaster-app -- sh -lc 'npm ls eslint eslint-config-next @next/eslint-plugin-next --omit=dev || true'

Expected:

  • (empty) or missing packages

8.2 Verify in prod (blaster)

Confirm initContainer and app container use the same image tag

kubectl -n blaster get deploy blaster-app -o=jsonpath='{.spec.template.spec.initContainers[0].image}{"\n"}{.spec.template.spec.containers[0].image}{"\n"}'

Expected:

  • both lines identical (for example registry.reids.net.au/games/blaster:prod-20251207.181)

Confirm Next version

kubectl -n blaster exec -it deploy/blaster-app -- sh -lc 'node -p "require(\"next/package.json\").version"'

Expected:

  • 14.2.33

Confirm migration deps are pinned as expected

kubectl -n blaster exec -it deploy/blaster-app -- sh -lc 'node -p "require(\"tsx/package.json\").version"; node -p "require(\"dotenv/package.json\").version"'

Expected:

  • 4.19.2
  • 16.4.7

Confirm migration status

kubectl -n blaster exec -it deploy/blaster-app -- sh -lc 'npm run migrate:status'

Expected:

  • a list of migrations 001..008 as ✓ (timestamps will differ from dev)

Confirm runtime does not include dev tooling

kubectl -n blaster exec -it deploy/blaster-app -- sh -lc 'npm ls eslint eslint-config-next @next/eslint-plugin-next --omit=dev || true'

Expected:

  • (empty) or missing packages

9. Verification checklist

  • Next.js version inside dev pods is 14.2.33
  • Next.js version inside prod pods is 14.2.33
  • Runtime has tsx@4.19.2 and dotenv@16.4.7
  • npm run migrate:status succeeds in dev and prod
  • db/migrations exists inside the container at /app/db/migrations
  • Runtime container does not include eslint tooling packages
  • initContainer and app container use the same image tag in prod
  • GitLab CI runs npm ci successfully (package.json and lockfile are in sync)

10. Summary

  • I did not panic-upgrade Blaster to Next 15 or 16.
  • I verified the versions inside the running pods first.
  • I pinned versions and fixed lockfile drift so CI behaves deterministically.
  • I hardened the Docker runtime image so it does not ship devDependencies.
  • I verified dev and prod in-cluster with repeatable kubectl commands.
note

If I later want to remove TypeScript execution from runtime entirely, I will switch migrations to compiled JavaScript and run them with node. That will be a separate and planned improvement.