Hybrid cluster
Replace the virtual worker with a physical node to make the development cluster hybrid. This adds realistic CPU, memory, storage, and network performance while keeping the control plane simple and virtualised.
When to do this
- The virtual worker is resource-constrained and new pods fail to schedule.
- You want realistic IO and network behaviour for databases or heavier services.
- You need to validate drivers, NICs, and storage on real hardware.
Prerequisites
- A physical machine on your LAN running Ubuntu Server LTS (matching your dev cluster minor version, for example 1.32.x).
- DNS entries and DHCP reservations prepared.
- SSH access from the Kubespray bastion VM.
- Awareness that Apple Silicon VMs are ARM64 and a physical host may be x86_64. Ensure images and add-ons are multi-arch.
Prepare installation media
- Download the correct Ubuntu Server ISO for your architecture.
- Most desktops and servers use `amd64`.
- The Ubuntu tutorial covers the download and media details.
- Create a bootable USB drive using a tool like balenaEtcher on macOS, or mount the ISO via your server’s lights-out management.
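The USB can also be written from a macOS terminal. A minimal sketch; the ISO filename and the `/dev/disk4` device name are examples — always verify the device with `diskutil list` first:

```shell
# Identify the USB stick — writing to the wrong disk destroys its data.
diskutil list
# Unmount (not eject) the stick, then write the ISO to the raw device.
diskutil unmountDisk /dev/disk4
sudo dd if=ubuntu-24.04-live-server-amd64.iso of=/dev/rdisk4 bs=4m
sudo diskutil eject /dev/disk4
```

Using the raw device (`rdisk4` rather than `disk4`) makes the write noticeably faster.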
Install Ubuntu Server on the physical host
- Follow the Ubuntu tutorial through disk selection and partitioning.
- Enable OpenSSH server during setup.
- After reboot, confirm SSH works from your workstation.
Initial system setup (SSH session)
- Log in and escalate privileges with `sudo -i`.
- Update the system and install helpful tools.
- Set a static IP with Netplan and apply it.
- Add a DNS record for the host and reconnect via its name.
- Configure DNS (systemd-resolved) and NTP (ntpsec) to point to your local services.
- Set the timezone to `Australia/Perth`.
- Create a non-root admin user for yourself and add it to the `sudo` group.
- Disable IPv6 if you prefer a v4-only lab.
- Disable cloud-init on this node.
- Install `nfs-common` if you will mount NAS-backed volumes.
- Reboot to confirm all changes persist.
Warning: After setting a static IP, your SSH session will drop. Reconnect using the new address or DNS name.
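As a sketch, a static-IP Netplan file for this host might look like the following; the interface name, addresses, and DNS server are placeholders for your LAN:

```yaml
# /etc/netplan/01-static.yaml — example values only; match your LAN.
network:
  version: 2
  ethernets:
    eno1:                        # find the real interface name with `ip link`
      dhcp4: false
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.2]
```

Apply it with `sudo netplan apply` — this is the point at which the SSH session drops if the address changed.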
Join the cluster with Kubespray
Inventory
- Add the physical host under `[kube_node]` in your Kubespray inventory.
- Refresh facts for all hosts with `playbooks/facts.yml`.
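For illustration, the inventory change and fact refresh might look like this; the inventory path, hostnames, and IPs are examples for your environment:

```ini
# inventory/mycluster/inventory.ini — excerpt; names and addresses are examples
[kube_node]
worker-1 ansible_host=192.168.1.41
worker-2 ansible_host=192.168.1.50
```

```shell
# Run from the Kubespray checkout so cached facts include the new host.
ansible-playbook -i inventory/mycluster/inventory.ini --become playbooks/facts.yml
```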
Scale up
- Run `scale.yml` limited to the new node to join it to the cluster.
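A minimal invocation, assuming the inventory path and node name from your setup:

```shell
# --limit joins only the new node and leaves existing nodes untouched.
ansible-playbook -i inventory/mycluster/inventory.ini --become \
  --limit worker-2 scale.yml
```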
Migrate workloads
- Cordon the virtual worker: `kubectl cordon worker-1`.
- Drain it safely: `kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data`.
- Use node selectors or taints so ingress and test workloads prefer the physical worker.
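The steps above can be sketched as follows; `worker-2` and the `node-type` label are illustrative names:

```shell
# Stop new pods scheduling on the virtual worker, then evict its pods.
kubectl cordon worker-1
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# Label the physical node so workloads can select it explicitly.
kubectl label node worker-2 node-type=physical
# In a pod spec, target it with:
#   nodeSelector:
#     node-type: physical
```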
Optional: remove the virtual worker
- Delete the node from Kubernetes and inventory when satisfied.
- Optionally run Kubespray’s `reset.yml` against the removed node if you want to wipe it clean.
Danger: The reset.yml playbook wipes kubelet, container runtime, CNI, and related state. Limit it carefully to the target node and confirm the prompt.
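A hedged sketch of the removal, with example inventory path and node name:

```shell
# Remove the node object from Kubernetes first.
kubectl delete node worker-1
# Then optionally wipe Kubernetes state from the machine itself.
# reset.yml asks for confirmation before destroying anything.
ansible-playbook -i inventory/mycluster/inventory.ini --become \
  --limit worker-1 reset.yml
```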
Verify
- The physical worker shows `Ready` and carries workloads.
- Ingress and test apps route correctly via the load balancer VIPs.
- A backup and a short test restore still complete successfully.
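Quick checks for the first two points; the node name is an example:

```shell
# Node should report STATUS Ready.
kubectl get nodes -o wide
# Confirm workloads actually landed on the physical worker.
kubectl get pods -A -o wide | grep worker-2
```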
Rollback
- Uncordon the virtual worker to resume scheduling there.
- Remove the physical node from the cluster and inventory.
- Re-run `scale.yml` without the physical node if needed.
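The rollback mirrors the migration; node names are examples:

```shell
# Let the virtual worker schedule pods again.
kubectl uncordon worker-1
# Evict workloads from the physical node and remove it from the cluster.
kubectl drain worker-2 --ignore-daemonsets --delete-emptydir-data
kubectl delete node worker-2
```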
Notes and architecture considerations
- CPU architecture. Mixed ARM64 and x86_64 is fine if images are multi-arch. Pin image digests in manifests where possible.
- Ingress. Prefer to schedule ingress on the physical worker for throughput tests.
- Storage. Local SSD on the physical node gives realistic IO for databases and log-heavy services.
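Before deploying to a mixed-architecture cluster, it is worth checking that an image is actually published for both architectures; the image tag here is only an example:

```shell
# List the architectures covered by the image's manifest list.
docker manifest inspect nginx:1.27 | grep architecture
```

If only one architecture appears, pin the workload to matching nodes with a `kubernetes.io/arch` node selector, or choose a multi-arch image.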
What “good” looks like
- The physical worker carries real workloads reliably.
- No drift in networking, VIPs, or certificate handling.
- Restore rehearsals still pass with documented timings and follow-ups.