Archived vagrant runbook

Vagrant, kubeadm, docker and flannel runbook (archived).

warning

Archived on: 2 November 2025. Risk: high.
Superseded by: /kubernetes/provisioning.
Notes: Replaced by Kubespray-based builds on Parallels VMs. Vagrant adds complexity on Apple Silicon, and flannel is no longer the default CNI in the Sphere. Keep for lab reference only.

Last known working versions
  • Kubernetes: 1.28.1.
  • OS: Ubuntu 22.04.2 LTS.
  • Container runtime: containerd 1.6.22.
  • CNI: flannel (kube-flannel.yml).
  • Dashboard: 2.7.0.

Deploy

Deploy and provision three Ubuntu VMs with Vagrant and Parallels on a MacBook Pro M1/M2.

Useful commands

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install hashicorp/tap/hashicorp-vagrant
vagrant plugin install vagrant-parallels
vagrant up --provider=parallels
vagrant status
vagrant ssh vm01
vagrant ssh vm02
vagrant ssh vm03
sudo swapoff -a
kubeadm init --apiserver-advertise-address=10.211.55.5 --control-plane-endpoint=10.211.55.5 --pod-network-cidr=10.244.0.0/16
kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl get pod -A
kubectl describe nodes
kubectl label node vm02 kubernetes.io/role=worker1
kubectl label node vm03 kubernetes.io/role=worker2
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl proxy &
ssh -A vagrant@10.211.55.5 -L 8001:127.0.0.1:8001
kubectl apply -f admin-dashboard.yaml
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl -n kubernetes-dashboard create token admin-user
sudo journalctl -xeu kubelet
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
sudo reboot

Install tools

Install homebrew

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Add homebrew to path

(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/andy/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Install vagrant

brew install hashicorp/tap/hashicorp-vagrant

Install vagrant-parallels plugin

Assuming you are using parallels.

vagrant plugin install vagrant-parallels

Clone repo

https://github.com/emmaliaocode/vagrant-vmware-macos-arm.git

Download the Vagrantfile and scripts into a new folder called vagrant_project.

mkdir vagrant_project
cd vagrant_project

Modify Vagrantfile

Change NUM_NODE to the number of nodes you require.

vi Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Define the number of nodes
NUM_NODE = 3

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-22.04-arm64"
  config.vm.box_check_update = false
  (1..NUM_NODE).each do |i|
    config.vm.define "vm0#{i}" do |node|
      node.vm.hostname = "vm0#{i}"
      node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"
      node.vm.provision "setup-hosts", :type => "shell", :path => "scripts/setup-hosts.sh" do |s|
        s.args = ["eth0"]
      end
    end
  end
end
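
To catch syntax errors before booting the VMs, the Vagrantfile can be checked first with Vagrant's built-in validation:

vagrant validate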

Bring up the VMs with Vagrant

vagrant up --provider=parallels

vagrant status

# Current machine states:
# vm01 running (parallels)
# vm02 running (parallels)
# vm03 running (parallels)

Cloned VMs

Three cloned Ubuntu VMs running.

vagrant ssh vm01
vagrant ssh vm02
vagrant ssh vm03

Parallels network setup

Note that the IP addresses assigned to each VM come from the DHCP pool configured in the Parallels Desktop preferences under Network.

I have configured my shared network as follows:

  • Connect Mac to this network - ticked
  • Enable IPv4 DHCP - ticked
  • Start address: 10.211.55.1
  • End address: 10.211.55.254
  • Subnet mask: 255.255.255.0
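
To confirm this pool from the macOS host, the Parallels command-line tools can report the shared network settings. A minimal sketch, assuming the CLI tools are available and the network is named Shared:

prlsrvctl net list
prlsrvctl net info Shared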

Vagrant collect host script

The scripts in the scripts folder collect the assigned IP addresses and add DNS entries and host names.

vi collect-host-ip.sh

#!/bin/bash

for NODE in $(ls -l ./.vagrant/machines | awk '{print $9}');
do
  echo "collecting $NODE ip address..."
  NODE_IP=$(vagrant ssh $NODE -c "cat /etc/hosts | tail -1 | cut -d' ' -f1")
  echo $NODE_IP $NODE | tr -d "\r" >> ./host-ip
done

echo ""
echo "ip addresses:"
cat ./host-ip
rm ./host-ip

Vagrant setup hosts script

vi setup-hosts.sh

#!/bin/bash

# setup hosts
set -e
IFNAME=$1
ADDRESS="$(ip -4 addr show $IFNAME | grep "inet" | head -1 | awk '{print $2}' | cut -d/ -f1)"
sed -e "s/^.*${HOSTNAME}.*/${ADDRESS} ${HOSTNAME} ${HOSTNAME}.local/" -i /etc/hosts

# update dns
sed -i -e 's/#DNS=/DNS=8.8.8.8/' /etc/systemd/resolved.conf
service systemd-resolved restart

Prerequisites

Disable swap

Swap must be turned off on all VMs.

sudo swapoff -a
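
Note that swapoff -a only lasts until the next reboot. To make it persistent, comment out the swap entry in /etc/fstab (the same command is used again later in this runbook):

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab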

Configure DNS resolution of VMs

vi /etc/hosts

10.211.55.5 vm01 vm01.local
10.211.55.6 vm02 vm02.local
10.211.55.7 vm03 vm03.local

Bootstrap using Kubeadm

Bootstrap a three-node Kubernetes cluster using Kubeadm.

  • vm01 (Master Node) will run the following services: kube-apiserver, etcd, scheduler and controller-manager.
  • vm02 and vm03 (Worker Nodes) will run kubelet and kube-proxy.

Repeat all steps on each VM

Run the following steps on each node with root privileges: sudo -i

Install container runtime

Enable IPv4 forwarding and let iptables see bridged traffic.

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

Apply sysctl params without reboot

sysctl --system
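
To check that the settings took effect, the values can be read back (if the net.bridge keys are missing, load the br_netfilter module first with sudo modprobe br_netfilter, as done later in this runbook):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward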

Install Docker Engine on Ubuntu

Set up the repository.

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Install Docker Engine

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo docker run hello-world

Enable CRI plugin within containerd

sed -i 's/disabled_plugins = \[\"cri\"\]/\#disabled_plugins \= \[\"cri\"\]/g'  /etc/containerd/config.toml
systemctl restart containerd
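
To confirm the CRI plugin is now active, containerd's plugin list can be inspected; a quick check using the ctr client that ships with containerd:

sudo ctr plugins ls | grep cri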

Install Kubeadm, Kubelet and Kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
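
The installed versions can be confirmed before continuing:

kubeadm version
kubectl version --client
kubelet --version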

Configure runtime endpoint

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

Only on VM01 - the master node

Initiate the control plane

kubeadm init --apiserver-advertise-address=10.211.55.5 --control-plane-endpoint=10.211.55.5 --pod-network-cidr=10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 10.211.55.5:6443 --token ob8r6s.khcrybkw6fw3rqav \
--discovery-token-ca-cert-hash sha256:41b2eeca0963d828fc5df1c5377c548dee05ddd7e7c7709c51e1e0fe50914b0c \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.5:6443 --token ob8r6s.khcrybkw6fw3rqav \
--discovery-token-ca-cert-hash sha256:41b2eeca0963d828fc5df1c5377c548dee05ddd7e7c7709c51e1e0fe50914b0c

Add .kube config

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

Initial pod status

root@vm01:~# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5dd5756b68-7lt52 0/1 Pending 0 2m36s
kube-system coredns-5dd5756b68-hlqcn 0/1 Pending 0 2m36s
kube-system kube-apiserver-vm01 1/1 Running 1 (2m54s ago) 2m52s
kube-system kube-proxy-5tgls 1/1 Running 1 (2m21s ago) 2m36s
root@vm01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vm01 NotReady control-plane 4m v1.28.1 10.211.55.5 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22

Install network add-on flannel

This allows the pods to communicate with each other.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
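
The flannel DaemonSet should roll out one pod per node. Its progress can be watched as follows (the DaemonSet name kube-flannel-ds matches the pod names shown in the output later in this runbook):

kubectl -n kube-flannel get pods -o wide
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds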

Modify on all three nodes

vi /etc/crictl.yaml
runtime-endpoint: "unix:///var/run/containerd/containerd.sock"
image-endpoint: "unix:///var/run/containerd/containerd.sock"
timeout: 10
debug: false
pull-image-on-create: false
disable-pull-on-run: false

On VM01

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
root@vm01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vm01 Ready control-plane 23m v1.28.1 10.211.55.5 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22
sudo journalctl -xeu kubelet

On ALL VMs

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sed -i 's/disabled_plugins = \[\"cri\"\]/\#disabled_plugins \= \[\"cri\"\]/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
reboot
root@vm01:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-586l2 1/1 Running 16 (9m56s ago) 61m
kube-system coredns-5dd5756b68-7lt52 1/1 Running 11 (5m40s ago) 70m
kube-system coredns-5dd5756b68-hlqcn 1/1 Running 12 (5m40s ago) 70m
kube-system etcd-vm01 1/1 Running 21 (5m40s ago) 67m
kube-system kube-apiserver-vm01 1/1 Running 20 (8m51s ago) 70m
kube-system kube-controller-manager-vm01 1/1 Running 21 (5m40s ago) 67m
kube-system kube-proxy-5tgls 1/1 Running 21 (5m40s ago) 70m
kube-system kube-scheduler-vm01 1/1 Running 21 (5m40s ago) 67m

Connect worker nodes to cluster

Ensure vm01 is stable before proceeding to connect worker nodes.

Leave the cluster for about 10 minutes and check the pod status. If a restart counter keeps increasing over a short period, or a pod is in an error or not-ready state, troubleshoot before continuing.

Run on VM02 and VM03

kubeadm join 10.211.55.5:6443 --token ob8r6s.khcrybkw6fw3rqav \
--discovery-token-ca-cert-hash sha256:41b2eeca0963d828fc5df1c5377c548dee05ddd7e7c7709c51e1e0fe50914b0c
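
Join tokens expire (by default after 24 hours). If the token from kubeadm init no longer works, generate a fresh join command on vm01:

kubeadm token create --print-join-command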

Run on VM01

root@vm01:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-586l2 1/1 Running 16 (13m ago) 64m
kube-flannel kube-flannel-ds-7spcx 1/1 Running 0 68s
kube-flannel kube-flannel-ds-nnd57 1/1 Running 0 61s
kube-system coredns-5dd5756b68-7lt52 1/1 Running 11 (8m50s ago) 73m
kube-system coredns-5dd5756b68-hlqcn 1/1 Running 12 (8m50s ago) 73m
kube-system etcd-vm01 1/1 Running 21 (8m50s ago) 71m
kube-system kube-apiserver-vm01 1/1 Running 20 (12m ago) 73m
kube-system kube-controller-manager-vm01 1/1 Running 21 (8m50s ago) 71m
kube-system kube-proxy-5tgls 1/1 Running 21 (8m50s ago) 73m
kube-system kube-proxy-lfr79 1/1 Running 0 61s
kube-system kube-proxy-ljxc2 1/1 Running 0 68s
kube-system kube-scheduler-vm01 1/1 Running 21 (8m50s ago) 71m
root@vm01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vm01 Ready control-plane 74m v1.28.1 10.211.55.5 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22
vm02 Ready <none> 74s v1.28.1 10.211.55.6 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22
vm03 Ready <none> 67s v1.28.1 10.211.55.7 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22

Label worker nodes

kubectl label node vm02 kubernetes.io/role=worker1
kubectl label node vm03 kubernetes.io/role=worker2
root@vm01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
vm01 Ready control-plane 76m v1.28.1 10.211.55.5 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22
vm02 Ready worker1 3m27s v1.28.1 10.211.55.6 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22
vm03 Ready worker2 3m20s v1.28.1 10.211.55.7 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.22
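
If a label needs changing later, it can be removed by appending a dash to the key and then re-applied (standard kubectl label syntax):

kubectl label node vm02 kubernetes.io/role-
kubectl label node vm02 kubernetes.io/role=worker1 --overwrite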

Deploy kubernetes dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
root@vm01:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-586l2 1/1 Running 16 (16m ago) 68m
kube-flannel kube-flannel-ds-7spcx 1/1 Running 0 4m50s
kube-flannel kube-flannel-ds-nnd57 1/1 Running 0 4m43s
kube-system coredns-5dd5756b68-7lt52 1/1 Running 11 (12m ago) 77m
kube-system coredns-5dd5756b68-hlqcn 1/1 Running 12 (12m ago) 77m
kube-system etcd-vm01 1/1 Running 21 (12m ago) 74m
kube-system kube-apiserver-vm01 1/1 Running 20 (15m ago) 77m
kube-system kube-controller-manager-vm01 1/1 Running 21 (12m ago) 74m
kube-system kube-proxy-5tgls 1/1 Running 21 (12m ago) 77m
kube-system kube-proxy-lfr79 1/1 Running 0 4m43s
kube-system kube-proxy-ljxc2 1/1 Running 0 4m50s
kube-system kube-scheduler-vm01 1/1 Running 21 (12m ago) 74m
kubernetes-dashboard dashboard-metrics-scraper-5657497c4c-qmvc6 1/1 Running 0 26s
kubernetes-dashboard kubernetes-dashboard-78f87ddfc-wlwnh 1/1 Running 0 26s

From VM01 start the proxy service

kubectl proxy &
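
The proxy listens on 127.0.0.1:8001. A quick check that it is serving the API:

curl -s http://127.0.0.1:8001/version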

Start tunnel

From a new terminal session, open a tunnel to the master node to expose port 8001.

ssh -A vagrant@10.211.55.5 -L 8001:127.0.0.1:8001
password: vagrant
sudo -i
cd /etc/kubernetes/manifests

Create the admin dashboard file

vi admin-dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply

kubectl apply -f admin-dashboard.yaml
kubectl -n kubernetes-dashboard create serviceaccount admin-user

Create token

note

For security, tokens have an expiry and need to be regenerated. The token below will not work on your system; it is shown as an example only.

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IlFGZTdtSTg4QV9yb2Y0aWdKeDRLRTNCQXFBNk9mbXpCNktEZHQ2WHZxYXMifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjkzMjc1MjI2LCJpYXQiOjE2OTMyNzE2MjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiM2NhNTY2ZDgtNzczZS00ZTdmLWIwYjUtYmJlYTU1OWRiNzIyIn19LCJuYmYiOjE2OTMyNzE2MjYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.RY__LmLzEtIWGE8-wL2kA1tAXO631ZqE-K-4DpKQL0g_MDRJx4c_5AoN5FU2QxATp2A7oWxqb6niHzGvVV4mh71xmPcMvhARoKECIrwYJRwY3-cLi-9OvVimcOpv000MuqBmqftV0XlO8MS2HlCVVysvZcmuhhsyK6nnYob0z-BrrS3G6Ue-XM999i7FfJbsgQN4JXHxlnjYsDFgFXw807UiRNpYYeDQw5rg5MtJnO5eOnCStvw4UeUYECDKZ7IANgDsGVp_d63MIOI-kY8lHf6Gml2TjYF3F7mHY08P6ey708lijcK2z3s_hrAU03hRKS67L8sNypUc3oWa3bAVuA
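
By default the token is short-lived. A longer-lived token can be requested with the --duration flag of kubectl create token (the API server may cap the maximum duration):

kubectl -n kubernetes-dashboard create token admin-user --duration=24h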

Connect to dashboard

Open a browser to the dashboard and enter the token above.

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Change default namespace

Click the dropdown and select all namespaces.


Errors and issues

root@vm01:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-586l2 0/1 CrashLoopBackOff 3 (44s ago) 13m
kube-system coredns-5dd5756b68-7lt52 0/1 Unknown 4 22m
kube-system coredns-5dd5756b68-hlqcn 0/1 Unknown 2 22m
kube-system etcd-vm01 1/1 Running 13 (79s ago) 20m
kube-system kube-apiserver-vm01 1/1 Running 10 (69s ago) 23m
kube-system kube-controller-manager-vm01 1/1 Running 13 (79s ago) 20m
kube-system kube-proxy-5tgls 0/1 Error 11 (104s ago) 22m
kube-system kube-scheduler-vm01 1/1 Running 13 (79s ago) 20m
root@vm01:~# kubectl get pods --all-namespaces
Get "https://10.211.55.5:6443/api/v1/pods?limit=500": dial tcp 10.211.55.5:6443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""
root@vm01:~# kubectl get pods --all-namespaces
The connection to the server 10.211.55.5:6443 was refused - did you specify the right host or port?
vagrant@vm01:~$ kubectl get nodes -o wide
E0828 10:45:58.354410 1797 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
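
The last error above simply means kubectl was run as a user with no kubeconfig (note the vagrant@ prompt), so it fell back to localhost:8080. For the connection-refused errors against 10.211.55.5:6443, a minimal checklist to narrow things down:

sudo systemctl status containerd kubelet
sudo journalctl -xeu kubelet | tail -50
swapon --show    # should print nothing if swap is off
ls -l $HOME/.kube/config && echo $KUBECONFIG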