Blaster repo preparation before Kubernetes deployment
Blaster GitOps series
- Blaster GitOps summary
- Blaster repo and branches
- Dockerfile & GitLab CI
- Clerk authentication & user setup
- Google OAuth for Clerk
- Blaster prep for automation - you are here
- Dev app k8s manifests
- Dev flux sources & Kustomizations
- Dev image automation
- Dev SOPS & age
- Dev verification & troubleshooting
- Dev full runbook
- Prod overview
- Prod app k8s manifests and deployment
- Prod Flux GitOps and image automation
- Prod Cloudflare, Origin CA and tunnel routing
- Prod full runbook
- Post development branches
1. What had to be prepared before Kubernetes
Before Blaster could run on the cluster, the repo needed two key things in place:
- A repeatable database migration system so that PostgreSQL could be initialised from Git-controlled SQL files, both locally and in Kubernetes.
- A .dockerignore file so Docker images did not accidentally include .git, .env.local or other sensitive or unnecessary files.
This page documents those changes and shows how they were verified in Kubernetes.
2. Database initialisation and verification
2.1 Verifying database name, user and password from Kubernetes
The first step was to prove that the Kubernetes Secret and the running app agreed on database connection details.
andy@Andrews-Mac-Studio-2 ~ % kubectl -n blaster get secret blaster-db-secret -o jsonpath='{.data.POSTGRES_DB}' | base64 -d; echo
kubectl -n blaster get secret blaster-db-secret -o jsonpath='{.data.POSTGRES_USER}' | base64 -d; echo
blaster_game
blaster_user
Connect to the PostgreSQL StatefulSet and inspect databases, roles and schemas:
andy@Andrews-Mac-Studio-2 ~ % kubectl -n blaster exec -it statefulset/blaster-db -- bash
root@blaster-db-0:/# psql -U blaster_user -d blaster_game
psql (15.8 (Debian 15.8-1.pgdg120+1))
Type "help" for help.
blaster_game=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges
--------------+--------------+----------+------------+------------+------------+-----------------+-------------------------------
blaster_game | blaster_user | UTF8 | en_US.utf8 | en_US.utf8 | | libc |
postgres | blaster_user | UTF8 | en_US.utf8 | en_US.utf8 | | libc |
template0 | blaster_user | UTF8 | en_US.utf8 | en_US.utf8 | | libc | =c/blaster_user +
| | | | | | | blaster_user=CTc/blaster_user
template1 | blaster_user | UTF8 | en_US.utf8 | en_US.utf8 | | libc | =c/blaster_user +
| | | | | | | blaster_user=CTc/blaster_user
(4 rows)
blaster_game=# \du
List of roles
Role name | Attributes | Member of
--------------+------------------------------------------------------------+-----------
blaster_user | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
blaster_game=# \dn
List of schemas
Name | Owner
--------+-------------------
public | pg_database_owner
(1 row)
blaster_game=# \dt
Did not find any relations.
blaster_game=# \q
root@blaster-db-0:/# exit
exit
Then confirm that the app Deployment sees the same values via environment variables and the Secret:
andy@Andrews-Mac-Studio-2 ~ % kubectl -n blaster exec deploy/blaster-app -c blaster -- env | grep -i 'POSTGRES\|DATABASE'
POSTGRES_HOST=blaster-db
POSTGRES_PORT=5432
POSTGRES_USER=blaster_user
POSTGRES_DB=blaster_game
POSTGRES_PASSWORD=REDACTED_PASSWORD
BLASTER_DB_SERVICE_PORT_POSTGRES=5432
andy@Andrews-Mac-Studio-2 ~ % kubectl -n blaster get secret blaster-db-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d; echo
REDACTED_PASSWORD
andy@Andrews-Mac-Studio-2 ~ % kubectl -n blaster exec deploy/blaster-app -c blaster -- printenv POSTGRES_PASSWORD
REDACTED_PASSWORD
The goal of this section is to prove that:
- PostgreSQL is running with the expected database and role.
- The app sees the same credentials via the Kubernetes Secret and its environment variables.
2.2 Database initialisation script: 001_init.sql
The schema is initialised using a series of ordered SQL files. The first migration creates the core game tables and indexes.
001_init.sql:
-- Initial schema for Blaster game
-- Creates profiles and high_scores tables
-- Profiles table: stores user information linked to Clerk authentication
CREATE TABLE IF NOT EXISTS profiles (
  id SERIAL PRIMARY KEY,
  clerk_user_id VARCHAR(255) UNIQUE NOT NULL,
  username VARCHAR(255),
  tier VARCHAR(50) DEFAULT 'free',
  credits INTEGER DEFAULT 10,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
-- High scores table: stores game scores for the leaderboard
CREATE TABLE IF NOT EXISTS high_scores (
  id SERIAL PRIMARY KEY,
  player_name VARCHAR(255) NOT NULL,
  score INTEGER NOT NULL,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
-- Create indexes for better query performance
CREATE INDEX IF NOT EXISTS idx_profiles_clerk_user_id ON profiles(clerk_user_id);
CREATE INDEX IF NOT EXISTS idx_high_scores_score ON high_scores(score DESC);
-- Add a comment explaining the schema
COMMENT ON TABLE profiles IS 'User profiles linked to Clerk authentication';
COMMENT ON TABLE high_scores IS 'Game high scores for the leaderboard';
COMMENT ON COLUMN profiles.clerk_user_id IS 'Unique identifier from Clerk authentication';
COMMENT ON COLUMN profiles.tier IS 'User tier: free, premium, etc.';
COMMENT ON COLUMN profiles.credits IS 'Game credits available to the user';
Key points:
- Tables are created with IF NOT EXISTS so the migration is idempotent.
- The schema supports both profiles and the leaderboard from the start.
- Indexes and comments are part of the migration so they are tracked under version control.
2.3 Migration CLI and scripts in package.json
Migrations are applied using a small CLI tool and npm scripts so that:
- Local development, CI and Kubernetes all use the same entry point.
- The Kubernetes initContainer can safely run migrations on every deploy.
package.json scripts:
{
  "name": "blaster",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint",
    "test": "npm run lint",
    "migrate": "tsx scripts/migrate.ts",
    "migrate:status": "tsx scripts/migrate.ts status",
    "ci:install": "npm ci --no-audit --no-fund"
  }
}
Migration CLI: scripts/migrate.ts:
#!/usr/bin/env node
/**
 * Migration CLI script
 *
 * Usage:
 *   npm run migrate          # Run all pending migrations
 *   npm run migrate status   # Show migration status
 *
 * Exit codes:
 *   0 - Success
 *   1 - Migration failed or error occurred
 */
// Load environment variables from .env.local (for local development)
// In production/Kubernetes, env vars are provided by the platform
import dotenv from 'dotenv';
import path from 'path';
// Load .env.local if it exists (Next.js convention)
dotenv.config({ path: path.join(process.cwd(), '.env.local') });
import { runMigrations, getMigrationStatus } from '../lib/db/migrate';
import { closePgPool } from '../lib/db/pool';
async function main() {
  const command = process.argv[2];
  try {
    if (command === 'status') {
      // Show migration status
      await getMigrationStatus();
    } else {
      // Run migrations
      const applied = await runMigrations();
      if (applied > 0) {
        console.log(`\n✅ Migration complete! Applied ${applied} migration(s).\n`);
      } else {
        console.log('\n✅ Database is up to date.\n');
      }
    }
    // Clean exit
    await closePgPool();
    process.exit(0);
  } catch (error) {
    console.error('\n❌ Migration failed!\n');
    console.error(error);
    // Clean up and exit with error
    await closePgPool();
    process.exit(1);
  }
}

main();
This script was written so that:
- Local dev can run npm run migrate before starting the app.
- CI can run migrations against a temporary database if required.
- Kubernetes can run migrations safely inside an initContainer using the same code path.
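The library module behind this CLI (lib/db/migrate.ts) is not reproduced in this post. As a rough illustration of the behaviour described above, a minimal runner might look like the sketch below; the database/migrations directory name, the schema_migrations bookkeeping table and the getPgPool helper are assumptions for this example, not necessarily the exact implementation.
lib/db/migrate.ts (illustrative sketch):
// lib/db/migrate.ts (illustrative sketch, not the exact implementation)
import fs from 'fs';
import path from 'path';
import { getPgPool } from './pool';

// Assumed location of the ordered SQL files (001_init.sql, 002_..., ...)
const MIGRATIONS_DIR = path.join(process.cwd(), 'database', 'migrations');

// Bookkeeping table so each file is applied at most once
async function ensureMigrationsTable(): Promise<void> {
  await getPgPool().query(`
    CREATE TABLE IF NOT EXISTS schema_migrations (
      filename TEXT PRIMARY KEY,
      applied_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
    )
  `);
}

export async function runMigrations(): Promise<number> {
  await ensureMigrationsTable();
  const pool = getPgPool();

  // Filename order gives migration order: 001_, 002_, 003_, ...
  const files = fs
    .readdirSync(MIGRATIONS_DIR)
    .filter((f) => f.endsWith('.sql'))
    .sort();

  const { rows } = await pool.query('SELECT filename FROM schema_migrations');
  const alreadyApplied = new Set(rows.map((r) => r.filename));

  let applied = 0;
  for (const file of files) {
    if (alreadyApplied.has(file)) continue;

    const sql = fs.readFileSync(path.join(MIGRATIONS_DIR, file), 'utf8');
    const client = await pool.connect();
    try {
      // Run each migration and its bookkeeping insert in one transaction
      await client.query('BEGIN');
      await client.query(sql);
      await client.query('INSERT INTO schema_migrations (filename) VALUES ($1)', [file]);
      await client.query('COMMIT');
      console.log(`Applied ${file}`);
      applied += 1;
    } catch (error) {
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }
  }
  return applied;
}

export async function getMigrationStatus(): Promise<void> {
  await ensureMigrationsTable();
  const { rows } = await getPgPool().query(
    'SELECT filename, applied_at FROM schema_migrations ORDER BY filename'
  );
  console.log(`${rows.length} migration(s) applied:`);
  for (const row of rows) {
    console.log(`  ${row.filename} (${row.applied_at})`);
  }
}
Whatever the exact implementation, the key property is that each SQL file is applied at most once, in filename order, which is what makes re-running npm run migrate safe on every deploy.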
3. Docker build hygiene with .dockerignore
3.1 Why .dockerignore was required
The original Docker build used a COPY . . pattern, which would copy everything in the repo into the image. Without a .dockerignore, that includes:
- .git/ directory and history.
- .env.local and other environment files.
- Development scripts and internal notes.
- Kubernetes and CI configuration not needed in the container.
3.2 .dockerignore
A .dockerignore file was introduced to explicitly exclude sensitive and unnecessary content.
# Version Control
.git
.gitignore
.gitattributes
# Environment Files
.env
.env.local
.env*.local
# OS Files
.DS_Store
Thumbs.db
# Dependencies
node_modules
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# IDE
.vscode
.idea
*.swp
*.swo
*~
# Documentation (not needed in production)
markdowns/
README.md
# Development Scripts
fr.sh
flattened_repo.txt
repo_structure.yaml
# Kubernetes Config (not needed in container)
k8s/
# CI/CD
.gitlab-ci.yml
# Build Files
Dockerfile
.dockerignore
# Database
database/schema.sql
# Testing
.eslintrc.json
.eslintignore
# Logs
*.log
# Temporary Files
*.tmp
*.temp
.cache
Verification step to confirm nothing sensitive is baked into the image:
# Build image and check what's inside
docker build -t blaster:test .
docker run --rm blaster:test ls -la | grep "\.git\|\.env"
# Should return nothing
This ensures that:
- Only application code and runtime assets make it into the container image.
- Secrets stay in Kubernetes and local files, not inside the built image.
- The build context is smaller, so images build faster and push more quickly.
4. Deployment and migration flow in Kubernetes
4.1 Environment variables in the app Deployment
The Kubernetes app Deployment is configured so that the app can reach PostgreSQL using the same POSTGRES_* environment variables verified earlier.
env:
  - name: POSTGRES_HOST
    value: "blaster-db"
  - name: POSTGRES_PORT
    value: "5432"
  - name: POSTGRES_USER
    value: "blaster_user"
  - name: POSTGRES_DB
    value: "blaster_game"
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: blaster-db-secret
        key: POSTGRES_PASSWORD
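These are the variables the application's database pool reads at runtime. The pool module itself (lib/db/pool.ts, which scripts/migrate.ts imports closePgPool from) is not reproduced in this post; a minimal sketch that consumes the POSTGRES_* values above could look like the following, where the getPgPool name and the lazy-initialisation pattern are assumptions for illustration.
lib/db/pool.ts (illustrative sketch):
// lib/db/pool.ts (illustrative sketch, not the exact implementation)
import { Pool } from 'pg';

let pool: Pool | null = null;

// Lazily create one shared pool from the POSTGRES_* env vars injected by the
// Deployment in Kubernetes (or loaded from .env.local in local development).
export function getPgPool(): Pool {
  if (!pool) {
    pool = new Pool({
      host: process.env.POSTGRES_HOST,
      port: Number(process.env.POSTGRES_PORT ?? 5432),
      user: process.env.POSTGRES_USER,
      password: process.env.POSTGRES_PASSWORD,
      database: process.env.POSTGRES_DB,
    });
  }
  return pool;
}

// Called by scripts/migrate.ts so the Node process can exit cleanly
export async function closePgPool(): Promise<void> {
  if (pool) {
    await pool.end();
    pool = null;
  }
}
Whatever the exact shape, the point is that the same POSTGRES_* values drive local development, the migration CLI and the running app.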
4.2 Migration initContainer
To make sure schema changes are always applied before the app starts, the Deployment uses an initContainer which runs the migration CLI.
initContainers:
  - name: migrate
    image: your-registry/blaster:latest
    command: ["npm", "run", "migrate"]
    env:
      # Same POSTGRES_* env vars as main container
How it works:
- The initContainer runs npm run migrate inside the pod.
- Migrations apply in order from the SQL files.
- If migrations fail, the pod never reaches Ready and does not serve traffic.
- Once migrations succeed, the main app container starts and uses the same database via the shared POSTGRES_* variables.
4.3 Flux and GitOps behaviour
Because migrations and .dockerignore live in Git:
- A developer adds or edits a migration file such as 001_init.sql or 003_add_feature.sql.
- Changes are committed and pushed to GitLab.
- CI builds a new Docker image and tags it prod-YYYYMMDD.BUILD.
- Flux detects the new image tag and updates the Kubernetes Deployment.
- On rollout, the initContainer runs npm run migrate using the new image.
- Schema changes are applied before traffic hits the new pods.
4.4 Optional self-hosted deployment
The same migration pipeline can run outside Kubernetes:
- Build the project: npm run build
- Set up PostgreSQL on your server.
- Configure environment variables (matching the POSTGRES_* settings).
- Run migrations: npm run migrate
- Start the server: npm run start
5. Verification checklist
Use this checklist before relying on the Blaster repo for automated deployments:
- npm run migrate succeeds against a local PostgreSQL instance.
- 001_init.sql creates the expected tables and indexes.
- kubectl -n blaster get secret blaster-db-secret shows the expected POSTGRES_* values.
- kubectl -n blaster exec statefulset/blaster-db confirms the database and role exist.
- kubectl -n blaster exec deploy/blaster-app shows matching POSTGRES_* env vars.
- .dockerignore excludes .git, .env*, k8s/, CI config and local scripts.
- A test image built with docker build does not contain .git or .env.local when inspected.