Velero backup and restore
Compatibility, CLI install, backup schedules, and restore.
Cluster Automated Backup Process
Backup Overview
This section documents how backups work, where they are stored, and when they run.
Backup Types
| Backup Type | Description | Storage Location | Schedule | Timezone |
|---|---|---|---|---|
| Velero Cluster Backup | Complete cluster state, including etcd, persistent volume snapshots, namespaces, etc. | S3 Bucket /velero-backups/k8s-velero | Daily at 1:00 AM AWST (17:00 UTC) | UTC |
| kubectl + Helm Outputs | Text files of kubectl and helm command outputs (resource listings, Helm values). | NFS /k8s/daily-backups | Daily at 2:30 AM AWST (18:30 UTC) | UTC |
| Static Manifests | YAML manifests stored in compressed archives. | NFS /k8s/daily-backups | Daily at 2:00 AM AWST | AWST |
| etcd Snapshots | Consistent etcd database snapshots. | NFS /k8s/daily-backups | Daily at 3:00 AM AWST | AWST |
| Bash History | Compressed bash history files for reference. | NFS /k8s/daily-backups | Daily at 2:00 AM AWST | AWST |
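The NFS-side backups in this table are produced by cron jobs on each master node rather than by Velero. The exact crontab isn't reproduced here, but a sketch consistent with the schedule column, assuming the nodes run in AWST and use the scripts seen later in the restore output (backup-k8s-files.sh and etcd-backup.sh), might look like:
# Illustrative only; the real crontab entries may differ.
0 2 * * * /usr/local/bin/backup-k8s-files.sh   # manifests, configs, bash history (02:00 AWST)
0 3 * * * /usr/local/bin/etcd-backup.sh        # etcd snapshot (03:00 AWST)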
Backup Storage Layout
NFS Share
/k8s/daily-backups/
- Structure:
/k8s/daily-backups/
└── <node-name>/
    ├── manifests/            # Compressed YAML manifests
    ├── etcd-snapshot.db      # etcd snapshot file
    ├── bash_history          # Bash history file
    └── <timestamp>/
        ├── kubectl-output.txt
        ├── helm-values-<release>.yaml
        └── …
S3 Bucket
/velero-backups/k8s-velero/
- Structure:
/velero-backups/k8s-velero/
├── backups/
│   ├── daily-backup-<timestamp>/
│   └── full-cluster-backup-<timestamp>/
└── kopia/
    └── (kopia metadata if used)
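Because MinIO speaks the S3 API, this layout can be browsed with any S3-compatible client. A sketch using the AWS CLI against the MinIO endpoint (the access keys are placeholders; the bucket, prefix, and endpoint match the Velero install further below):
export AWS_ACCESS_KEY_ID=<minio-access-key>
export AWS_SECRET_ACCESS_KEY=<minio-secret-key>
aws s3 ls s3://k8s-velero/cluster-dev-reids/backups/ --endpoint-url http://nas.reids.net.au:8010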
Mount NFS Backup Share
- Mount the NFS share so you can access the backup files.
- First create the mount point:
mkdir -p /k8s/daily-backups
- Mount the share (this is temporary and does not persist between reboots):
mount -t nfs nas.reids.net.au:/k8s/daily-backups /k8s/daily-backups
- Add the share to /etc/fstab so it mounts on server boot:
vi /etc/fstab
nas.reids.net.au:/k8s/daily-backups /k8s/daily-backups nfs defaults,_netdev 0 0
- Apply the changes:
mount -a
- Verify:
ls -la /k8s/daily-backups
ls -lart /k8s/daily-backups/k8s-master-v1
df -h | grep /k8s/daily-backups
nas.reids.net.au:/k8s/daily-backups 1.4T 16G 1.4T 2% /k8s/daily-backups
ls -1 /k8s/daily-backups/k8s-master-v1/configs/configs-*.tar.gz
/k8s/daily-backups/k8s-master-v1/configs/configs-2025-10-06_02-00-01.tar.gz
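Before restoring anything, you can list an archive's contents without extracting it:
tar -tzf /k8s/daily-backups/k8s-master-v1/configs/configs-2025-10-06_02-00-01.tar.gz | head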
Restore Script
The full restore script, restore-k8s-files.sh, is shown below:
#!/bin/bash
set -euo pipefail
shopt -s nullglob

BASE_DIR="/k8s/daily-backups"

# --- helpers ---------------------------------------------------------------
# Given: dir, stem (manifests|bash_history|configs), date (yyyy-mm-dd_HH-MM[-SS]), ext (".tar.gz" or ".txt")
# Returns: echo the resolved file path OR empty if not found.
resolve_backup() {
  local dir="$1" stem="$2" date_in="$3" ext="$4"

  # 1) Try exact date first (works if user typed full yyyy-mm-dd_HH-MM-SS)
  local exact="${dir}/${stem}-${date_in}${ext}"
  if [[ -f "$exact" ]]; then
    echo "$exact"
    return 0
  fi

  # 2) Fuzzy by minute: strip any trailing "-SS" and search that minute
  #    This also lets the user type only up to minutes (e.g., 2025-10-06_02-00)
  local minute_prefix
  if [[ "$date_in" =~ ^([0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}-[0-9]{2}) ]]; then
    minute_prefix="${BASH_REMATCH[1]}"
  else
    # If the input doesn't even include minutes properly, bail.
    echo ""
    return 0
  fi

  # Collect all candidates for that minute (00..59)
  local candidates=( "${dir}/${stem}-${minute_prefix}-"??"${ext}" )

  # If none, return empty
  if (( ${#candidates[@]} == 0 )); then
    echo ""
    return 0
  fi

  # If exactly one, take it
  if (( ${#candidates[@]} == 1 )); then
    echo "${candidates[0]}"
    return 0
  fi

  # If multiple (rare), offer a quick chooser.
  # Note: the caller captures this function's stdout, so all interactive
  # output goes to stderr; only the chosen path is echoed to stdout.
  echo >&2
  echo "Found multiple ${stem} backups for ${minute_prefix}:" >&2
  local i=1
  for f in "${candidates[@]}"; do
    echo "  [$i] $f" >&2
    ((i++))
  done
  local choice
  while true; do
    read -rp "Select 1-${#candidates[@]} (default 1): " choice
    if [[ -z "${choice:-}" ]]; then choice=1; fi
    if [[ "$choice" =~ ^[0-9]+$ ]] && (( choice>=1 && choice<=${#candidates[@]} )); then
      echo "${candidates[choice-1]}"
      return 0
    fi
    echo "Invalid choice." >&2
  done
}
# --------------------------------------------------------------------------

echo
echo "🔍 Available node backup directories:"
ls -1 "$BASE_DIR"
echo
read -rp "🗂 Enter the NODE directory you want to restore from (e.g., k8s-master-v1): " NODE_NAME

NODE_DIR="${BASE_DIR}/${NODE_NAME}"
MANIFESTS_DIR="${NODE_DIR}/manifests"
BASH_HISTORY_DIR="${NODE_DIR}/bash_history"
CONFIGS_DIR="${NODE_DIR}/configs"

# Validate selection
if [[ ! -d "$NODE_DIR" ]]; then
  echo "❌ Directory does not exist: $NODE_DIR"
  exit 1
fi

echo
echo "🔍 Available manifests backups:"
ls -1 "${MANIFESTS_DIR}"/manifests-*.tar.gz 2>/dev/null | tail -n 10
echo
echo "🔍 Available bash_history backups:"
ls -1 "${BASH_HISTORY_DIR}"/bash_history-*.txt 2>/dev/null | tail -n 10
echo
echo "🔍 Available configs backups:"
ls -1 "${CONFIGS_DIR}"/configs-*.tar.gz 2>/dev/null | tail -n 10
echo
read -rp "⚠️ Are you sure you want to proceed with restoring? (y/N) " confirm
if [[ ! "$confirm" =~ ^[Yy]$ ]]; then
  echo "❌ Restore aborted."
  exit 1
fi
echo
read -rp "🗂 Enter the DATE portion (accepts yyyy-mm-dd_HH-MM or yyyy-mm-dd_HH-MM-SS): " DATE

# --- Manifests -------------------------------------------------------------
echo
echo "📂 Restoring manifests..."
MANIFESTS_ARCHIVE="$(resolve_backup "$MANIFESTS_DIR" "manifests" "$DATE" ".tar.gz")"
if [[ -n "$MANIFESTS_ARCHIVE" && -f "$MANIFESTS_ARCHIVE" ]]; then
  # Extract into /root/manifests (strip top folder if present)
  mkdir -p /root/manifests
  tar -xzf "$MANIFESTS_ARCHIVE" --strip-components=1 -C /root
  echo "✅ Manifests restored to /root/manifests"
else
  echo "⚠️ No manifests archive matched for: ${DATE}"
fi

# --- Bash history ----------------------------------------------------------
echo
echo "📂 Restoring bash history..."
BASH_HISTORY_FILE="$(resolve_backup "$BASH_HISTORY_DIR" "bash_history" "$DATE" ".txt")"
if [[ -n "$BASH_HISTORY_FILE" && -f "$BASH_HISTORY_FILE" ]]; then
  cp "$BASH_HISTORY_FILE" /root/.bash_history_restored
  echo "✅ Bash history copied to /root/.bash_history_restored"
  echo "   To replace your current bash history, run:"
  echo "   mv /root/.bash_history_restored /root/.bash_history && history -r"
else
  echo "⚠️ No bash history file matched for: ${DATE}"
fi

# --- Configs ---------------------------------------------------------------
echo
echo "📂 Restoring configs..."
CONFIGS_ARCHIVE="$(resolve_backup "$CONFIGS_DIR" "configs" "$DATE" ".tar.gz")"
if [[ -n "$CONFIGS_ARCHIVE" && -f "$CONFIGS_ARCHIVE" ]]; then
  mkdir -p /root/configs-restore
  tar -xzf "$CONFIGS_ARCHIVE" -C /root/configs-restore
  echo "✅ Configs extracted to /root/configs-restore"
  echo
  read -rp "⚠️ Automatically overwrite /usr/local/bin/*.sh with restored versions? (y/N) " overwrite_scripts
  if [[ "$overwrite_scripts" =~ ^[Yy]$ ]]; then
    if compgen -G "/root/configs-restore/usr/local/bin/*.sh" > /dev/null; then
      cp -v /root/configs-restore/usr/local/bin/*.sh /usr/local/bin/
      chmod +x /usr/local/bin/*.sh
      echo "✅ Scripts restored to /usr/local/bin"
    else
      echo "ℹ️ No *.sh files found in restored configs."
    fi
  else
    echo "❌ Skipped restoring scripts."
  fi
  echo
  read -rp "⚠️ Automatically overwrite /root/.velero/* with restored credentials? (y/N) " overwrite_velero
  if [[ "$overwrite_velero" =~ ^[Yy]$ ]]; then
    mkdir -p /root/.velero
    if compgen -G "/root/configs-restore/root/.velero/*" > /dev/null; then
      cp -v /root/configs-restore/root/.velero/* /root/.velero/ || true
      chmod 600 /root/.velero/* || true
      echo "✅ Velero credentials restored to /root/.velero"
    else
      echo "ℹ️ No Velero files found in restored configs."
    fi
  else
    echo "❌ Skipped restoring Velero credentials."
  fi
else
  echo "⚠️ No configs archive matched for: ${DATE}"
fi

echo
echo "🎉 Restore process completed."
- This script is saved in every daily backup of all master nodes from all clusters.
- Extract the restore script only (note that tar's -C is position-sensitive, so it must come before the member name for the file to land in /usr/local/bin):
tar -xzf /k8s/daily-backups/k8s-m-p1/configs/configs-2025-10-06_02-00-01.tar.gz -C /usr/local/bin --strip-components=3 --wildcards 'usr/local/bin/restore-k8s-files.sh'
- Set permissions:
chmod +x /usr/local/bin/restore-k8s-files.sh
- Run the restore:
/usr/local/bin/restore-k8s-files.sh
🔍 Available node backup directories:
2025-10-05_18-32-10
k8s-master-v1
k8s-m-p1
🗂 Enter the NODE directory you want to restore from (e.g., k8s-master-v1): k8s-m-p1
🔍 Available manifests backups:
/k8s/daily-backups/k8s-m-p1/manifests/manifests-2025-10-06_02-00-01.tar.gz
🔍 Available bash_history backups:
/k8s/daily-backups/k8s-m-p1/bash_history/bash_history-2025-10-06_02-00-01.txt
🔍 Available configs backups:
/k8s/daily-backups/k8s-m-p1/configs/configs-2025-10-06_02-00-01.tar.gz
⚠️ Are you sure you want to proceed with restoring? (y/N) y
🗂 Enter the DATE portion (accepts yyyy-mm-dd_HH-MM or yyyy-mm-dd_HH-MM-SS): 2025-10-06_02-00
📂 Restoring manifests...
✅ Manifests restored to /root/manifests
📂 Restoring bash history...
✅ Bash history copied to /root/.bash_history_restored
To replace your current bash history, run:
mv /root/.bash_history_restored /root/.bash_history && history -r
📂 Restoring configs...
✅ Configs extracted to /root/configs-restore
⚠️ Automatically overwrite /usr/local/bin/*.sh with restored versions? (y/N) y
'/root/configs-restore/usr/local/bin/backup-k8s-files.sh' -> '/usr/local/bin/backup-k8s-files.sh'
'/root/configs-restore/usr/local/bin/etcd-backup.sh' -> '/usr/local/bin/etcd-backup.sh'
'/root/configs-restore/usr/local/bin/restore-k8s-files.sh' -> '/usr/local/bin/restore-k8s-files.sh'
✅ Scripts restored to /usr/local/bin
⚠️ Automatically overwrite /root/.velero/* with restored credentials? (y/N) y
'/root/configs-restore/root/.velero/credentials-velero' -> '/root/.velero/credentials-velero'
✅ Velero credentials restored to /root/.velero
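After a restore, it can be worth spot-checking how the restored manifests differ from what is currently running before re-applying anything. A sketch using kubectl diff, which is read-only and exits non-zero when differences exist (assumes the restored files are valid Kubernetes manifests):
kubectl diff -R -f /root/manifests | head -n 40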
Cluster backup process using Velero
GitHub page: Velero
Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises.
Velero lets you:
- Take backups of your cluster and restore in case of loss
- Migrate cluster resources to other clusters
- Replicate your production cluster to development and testing clusters
Velero consists of:
- A server that runs on your cluster
- A command-line client that runs locally
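In day-to-day use this boils down to a pair of CLI commands. A minimal sketch (the namespace and backup names are illustrative):
velero backup create my-app-backup --include-namespaces my-app
velero restore create --from-backup my-app-backup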
Velero compatibility matrix
The following is a list of the supported Kubernetes versions for each Velero version.
| Velero Version | Expected Kubernetes Version Compatibility | Tested on Kubernetes Version |
|---|---|---|
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4, and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
Cluster version
When the dev cluster was provisioned using Kubespray, we specified kube_version: 1.32.9, released 9th September 2025.
Although this exact version hasn't been explicitly tested with Velero, Kubernetes 1.32.3 has been tested successfully with Velero 1.17, so we will install Velero 1.17.
However, if I were restoring a cluster from a Velero backup, I would ensure that the Kubernetes version and the Velero version matched exactly to avoid any unexpected restore errors. I will detail the restore process in a separate section.
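A quick way to confirm the versions in play before a backup or restore:
velero version     # prints both Client and Server versions once the server is installed
kubectl version    # prints the kubectl client and cluster (server) versions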
Velero plugin for AWS
You may ask why we need an AWS plugin to use Velero with our on-premises cluster. It's because the "AWS" plugin is, in effect, the S3 plugin.
Velero always needs an object storage plugin to write and read backups (the tarred manifests, metadata, and Restic/node-agent data). That choice depends on the backup bucket's API, not on where the Kubernetes cluster runs.
- We are using MinIO on-prem. MinIO speaks the S3 API, and the Velero plugin that implements S3 is named velero-plugin-for-aws. So even on-prem, we use the "AWS" plugin to talk to any S3-compatible endpoint (MinIO, Ceph RGW, etc.).
- The plugin also includes an EBS volume snapshotter, but we will be disabling snapshots with --use-volume-snapshots=false, so we are only using its object store bit.
- If the bucket were Azure Blob or GCS instead, we would use the Azure or GCP plugin respectively. If we later want PV snapshots with the on-prem CSI driver, we would add the Velero CSI plugin alongside the S3 plugin.
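The S3 plugin reads standard AWS-style credentials, so the secret file referenced in the install command below (~/.velero/credentials-velero) simply holds the MinIO keys in the usual AWS INI format (values are placeholders):
[default]
aws_access_key_id = <minio-access-key>
aws_secret_access_key = <minio-secret-key>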
GitHub page: Velero Plugins
Compatibility
Below is a listing of plugin versions and respective Velero versions that are compatible.
| Plugin Version | Velero Version |
|---|---|
| v1.13.x | v1.17.x |
| v1.12.x | v1.16.x |
| v1.11.x | v1.15.x |
| v1.10.x | v1.14.x |
| v1.9.x | v1.13.x |
As we have selected Velero 1.17.0, we will use plugin version 1.13.0.
If I were restoring a cluster from a Velero backup, I would ensure that the Kubernetes version, Velero version, and Velero plugin version matched exactly to avoid any unexpected restore errors.
Install the Velero CLI on the master node
Install the single binary to /usr/local/bin:
- For AMD64 systems:
export VELERO_VERSION=1.17.0
curl -L -o /tmp/velero.tgz "https://github.com/vmware-tanzu/velero/releases/download/v${VELERO_VERSION}/velero-v${VELERO_VERSION}-linux-amd64.tar.gz"
tar -xzf /tmp/velero.tgz -C /tmp
install /tmp/velero-v${VELERO_VERSION}-linux-amd64/velero /usr/local/bin/velero
velero version --client-only
- For ARM systems:
export VELERO_VERSION=1.17.0
curl -L -o /tmp/velero.tgz "https://github.com/vmware-tanzu/velero/releases/download/v${VELERO_VERSION}/velero-v${VELERO_VERSION}-linux-arm64.tar.gz"
tar -xzf /tmp/velero.tgz -C /tmp
install /tmp/velero-v${VELERO_VERSION}-linux-arm64/velero /usr/local/bin/velero
velero version --client-only
- As my master node is virtualised on a Mac, I will install the ARM version:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 52.3M 100 52.3M 0 0 10.6M 0 0:00:04 0:00:04 --:--:-- 12.5M
Client:
Version: v1.17.0
Git commit: 3172d9f99c3d501aad9ddfac8176d783f7692dce
- Install Velero, noting the following options:
  - bucket: k8s-velero
  - prefix: cluster-dev-reids
  - s3Url: http://nas.reids.net.au:8010
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.13.0 \
--bucket k8s-velero \
--prefix cluster-dev-reids \
--secret-file ~/.velero/credentials-velero \
--backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://nas.reids.net.au:8010 \
--use-volume-snapshots=false \
--features=EnableNodeAgent \
--use-node-agent
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client
CustomResourceDefinition/backuprepositories.velero.io: created
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: attempting to create resource client
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: attempting to create resource client
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: attempting to create resource client
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
CustomResourceDefinition/datadownloads.velero.io: attempting to create resource
CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client
CustomResourceDefinition/datadownloads.velero.io: created
CustomResourceDefinition/datauploads.velero.io: attempting to create resource
CustomResourceDefinition/datauploads.velero.io: attempting to create resource client
CustomResourceDefinition/datauploads.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: attempting to create resource client
Namespace/velero: created
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: attempting to create resource client
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: attempting to create resource client
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: attempting to create resource client
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: attempting to create resource client
BackupStorageLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: attempting to create resource client
Deployment/velero: created
DaemonSet/node-agent: attempting to create resource
DaemonSet/node-agent: attempting to create resource client
DaemonSet/node-agent: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
Verify
Steps to verify Velero is ready and operational:
- Confirm namespace and node-agent:
kubectl get all -n velero
NAME READY STATUS RESTARTS AGE
pod/node-agent-dgcjj 1/1 Running 0 10m
pod/velero-8575f5cd95-lggvw 1/1 Running 0 10m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-agent 1 1 1 1 1 kubernetes.io/os=linux 10m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/velero 1/1 1 1 10m
NAME DESIRED CURRENT READY AGE
replicaset.apps/velero-8575f5cd95 1 1 1 10m
kubectl -n velero get ds/node-agent -o wide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
node-agent 1 1 1 1 1 kubernetes.io/os=linux 5m39s node-agent velero/velero:v1.17.0 name=node-agent
- Confirm backup storage locations:
kubectl -n velero get backupstoragelocations -o wide
NAME PHASE LAST VALIDATED AGE DEFAULT
default Available 54s 12m true
kubectl -n velero describe backupstoragelocations default
Name: default
Namespace: velero
Labels: component=velero
Annotations: <none>
API Version: velero.io/v1
Kind: BackupStorageLocation
Metadata:
Creation Timestamp: 2025-10-07T04:47:25Z
Generation: 156
Resource Version: 407436
UID: faf5bbd6-d159-49c2-814f-b23c17a0d8a0
Spec:
Config:
Region: minio
s3ForcePathStyle: true
s3Url: http://nas.reids.net.au:8010
Default: true
Object Storage:
Bucket: k8s-velero
Prefix: cluster-dev-reids
Provider: aws
Status:
Last Synced Time: 2025-10-07T06:08:51Z
Last Validation Time: 2025-10-07T06:08:01Z
Phase: Available
Events: <none>
kubectl -n velero describe backupstoragelocations default | sed -n '1,120p' | grep -A2 -i 'Object Storage'
Object Storage:
Bucket: k8s-velero
Prefix: cluster-dev-reids
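The same check is available through the Velero CLI, which lists each backup storage location and its phase:
velero backup-location get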
Run a small smoke test to confirm end-to-end writes:
- Create the smoketest:
velero backup create smoke --include-namespaces velero --ttl 10m
Backup request "smoke" submitted successfully.
Run `velero backup describe smoke` or `velero backup logs smoke` for more details.
- Describe the smoketest:
velero backup describe smoke --details
Name: smoke
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/resource-timeout=10m0s
velero.io/source-cluster-k8s-gitversion=v1.32.9
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=32
Phase: Completed
Namespaces:
Included: velero
Excluded: <none>
Resources:
Included cluster-scoped: <none>
Excluded cluster-scoped: volumesnapshotcontents.snapshot.storage.k8s.io
Included namespace-scoped: *
Excluded namespace-scoped: volumesnapshots.snapshot.storage.k8s.io
Label selector: <none>
Or label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
File System Backup (Default): false
Snapshot Move Data: false
Data Mover: velero
TTL: 10m0s
CSISnapshotTimeout: 10m0s
ItemOperationTimeout: 4h0m0s
Hooks: <none>
Backup Format Version: 1.1.0
Started: 2025-10-07 14:13:51 +0800 AWST
Completed: 2025-10-07 14:13:53 +0800 AWST
Expiration: 2025-10-07 14:23:51 +0800 AWST
Total items to be backed up: 18
Items backed up: 18
Resource List:
apiextensions.k8s.io/v1/CustomResourceDefinition:
- backups.velero.io
- backupstoragelocations.velero.io
apps/v1/ControllerRevision:
- velero/node-agent-7974756db
apps/v1/DaemonSet:
- velero/node-agent
apps/v1/Deployment:
- velero/velero
apps/v1/ReplicaSet:
- velero/velero-8575f5cd95
rbac.authorization.k8s.io/v1/ClusterRole:
- cluster-admin
rbac.authorization.k8s.io/v1/ClusterRoleBinding:
- velero
v1/ConfigMap:
- velero/kube-root-ca.crt
v1/Namespace:
- velero
v1/Pod:
- velero/node-agent-dgcjj
- velero/velero-8575f5cd95-lggvw
v1/Secret:
- velero/cloud-credentials
- velero/velero-repo-credentials
v1/ServiceAccount:
- velero/default
- velero/velero
velero.io/v1/Backup:
- velero/smoke
velero.io/v1/BackupStorageLocation:
- velero/default
Backup Volumes:
Velero-Native Snapshots: <none included>
CSI Snapshots: <none included>
Pod Volume Backups: <none included>
HooksAttempted: 0
HooksFailed: 0
- View logs:
velero backup logs smoke
time="2025-10-07T06:13:53Z" level=info msg="Executing RemapCRDVersionAction" backup=velero/smoke cmd=/velero logSource="pkg/backup/actions/remap_crd_version_action.go:61" pluginName=velero
time="2025-10-07T06:13:53Z" level=info msg="Exiting RemapCRDVersionAction, the cluster does not support v1beta1 CRD" backup=velero/smoke cmd=/velero logSource="pkg/backup/actions/remap_crd_version_action.go:88" pluginName=velero
time="2025-10-07T06:13:53Z" level=info msg="Waiting for completion of PVB" backup=velero/smoke logSource="pkg/podvolume/backupper.go:403"
time="2025-10-07T06:13:53Z" level=info msg="Summary for skipped PVs: []" backup=velero/smoke logSource="pkg/backup/backup.go:614"
time="2025-10-07T06:13:53Z" level=info msg="Backed up a total of 18 items" backup=velero/smoke logSource="pkg/backup/backup.go:618" progress=
Velero backup list
- Get all available Velero backups:
velero backup get
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
smoke Completed 0 0 2025-10-07 14:13:51 +0800 AWST 8m default <none>
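To inspect exactly what a backup contains, you can pull its contents down locally; velero backup download retrieves the backup tarball (the filename below assumes the default <name>-data.tar.gz naming):
velero backup download smoke
tar -tzf smoke-data.tar.gz | head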

Schedule Velero backups
- Get the cluster UID to add as a label to the backup:
CLUSTER_UID=$(kubectl get ns kube-system -o jsonpath='{.metadata.uid}')
echo $CLUSTER_UID
a2f25f81-47fa-484f-be57-9959aa7f9c03
- Create the backup:
- Note that Velero's schedules always interpret cron times in UTC, regardless of your system or cluster timezone. There is currently no flag or configuration to switch the schedule to local time.
- As I want the backup to run at 01:00 AWST, I will schedule it for 17:00 UTC (see the conversion sketch after the schedule output below):
velero schedule create daily-backup \
--schedule "0 17 * * *" \
--include-namespaces "*" \
--snapshot-volumes \
--ttl 720h \
--labels "cluster=dev-reids,cluster_uid=${CLUSTER_UID}"
Schedule "daily-backup" created successfully.
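If you ever need to double-check an AWST-to-UTC conversion for a schedule, GNU date can do it directly:
date -u -d 'TZ="Australia/Perth" 01:00'   # prints the UTC equivalent of 01:00 AWST (17:00)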
- Check backup schedules:
velero get schedules
NAME STATUS CREATED SCHEDULE BACKUP TTL LAST BACKUP SELECTOR PAUSED
daily-backup Enabled 2025-10-07 17:07:06 +0800 AWST 0 17 * * * 720h0m0s n/a <none> false
- Delete a schedule:
- Note that deleting the schedule does not delete the backups it already created
velero schedule delete daily-backup --confirm
- To see them:
velero backup get -l velero.io/schedule-name=daily-backup
- To remove those as well:
velero backup delete --selector velero.io/schedule-name=daily-backup --confirm
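You can also trigger an immediate one-off backup that inherits a schedule's template, which is handy for testing a new schedule without waiting for the next cron window:
velero backup create --from-schedule daily-backup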