NFS storage

Provisioning, mounts, and PVC templates.

  • Process to deploy the NFS provisioner via Helm

  • Ensure the nfs-common package is installed on every node in the cluster: apt install -y nfs-common
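To confirm the package is actually present before deploying, a quick check can be run on each node (a sketch assuming Debian/Ubuntu nodes with dpkg available):

```shell
# Check whether nfs-common is installed on this node (Debian/Ubuntu assumed)
if dpkg -s nfs-common >/dev/null 2>&1; then
  echo "nfs-common installed"
else
  echo "nfs-common missing - install with: sudo apt install -y nfs-common"
fi
```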

  • On your NAS, add the cluster subnet (or each node's IP address) to the NFS host access permissions for the shared folder named k8s. Ensure read/write permission is selected; otherwise the test pod will fail to start with the error: ProvisioningFailed: read-only file system
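For context, the NAS permission screen is doing the equivalent of a plain-Linux /etc/exports entry. A sketch of that entry follows; the subnet is an assumed example, and no_root_squash is commonly needed so the provisioner can set directory ownership (NAS vendors vary in how they expose these options):

```
# Illustrative /etc/exports equivalent of the NAS share settings
# (10.70.200.0/24 is an assumed node subnet)
/k8s  10.70.200.0/24(rw,sync,no_subtree_check,no_root_squash)
```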

  • Create a values file specifying the path and server:

    vi ~/manifests/nfs-provisioning-nfs-subdir-external-provisioner-values.yaml
    nfs:
      path: /k8s
      server: nas.reids.net.au
    storageClass:
      defaultClass: true
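The install command below references the chart as nfs-subdir-external-provisioner/nfs-subdir-external-provisioner, which assumes the chart repository has already been added on this machine. If it has not, add and refresh it first (repo URL per the kubernetes-sigs project):

```shell
# One-time: add the chart repo, then refresh the local chart index
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
```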
  • Install referencing the values file:

    helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --namespace nfs-provisioning --create-namespace \
    --values ~/manifests/nfs-provisioning-nfs-subdir-external-provisioner-values.yaml
    Release "nfs-subdir-external-provisioner" does not exist. Installing it now.
    NAME: nfs-subdir-external-provisioner
    LAST DEPLOYED: Sun Oct 5 15:54:16 2025
    NAMESPACE: nfs-provisioning
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  • Confirm pod creation:

    kubectl get pods -n nfs-provisioning -o wide
    NAME                                               READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
    nfs-subdir-external-provisioner-666c67448b-2bhks   1/1     Running   0          55s   10.70.200.12   dev-w-p1   <none>           <none>
  • Confirm the storage class:

    kubectl get storageclass
    NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   55s
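The storage class above uses the chart defaults. One value worth knowing about is storageClass.archiveOnDelete, which controls whether the backing directory on the share is deleted outright or renamed with an archived- prefix when a PVC is removed. A values-file sketch (the setting shown is an example choice, not what was deployed above):

```yaml
nfs:
  path: /k8s
  server: nas.reids.net.au
storageClass:
  defaultClass: true
  # When a bound PVC is deleted: "false" removes the backing directory,
  # "true" renames it to archived-<dir> on the share instead.
  archiveOnDelete: "false"
```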
  • Functional test: create a PVC and a Pod

    • Create a namespace called dev (optional here, since the test manifest below also defines it; pre-creating it this way is what triggers the kubectl apply annotation warning later):
    kubectl create namespace dev
    namespace/dev created
    • Create a simple test manifest: vi nfs-test-dev.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-test-pvc
      namespace: dev
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: nfs-client
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-test-pod
      namespace: dev
    spec:
      containers:
        - name: test
          image: busybox
          command: ['sh', '-c', 'echo Hello from NFS in DEV > /mnt/testfile && sleep 3600']
          volumeMounts:
            - name: nfs-volume
              mountPath: /mnt
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-test-pvc
  • Apply the manifest:

    kubectl apply -f nfs-test-dev.yaml
    Warning: resource namespaces/dev is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
    namespace/dev configured
    persistentvolumeclaim/nfs-test-pvc created
    pod/nfs-test-pod created
  • Check PVC:

    kubectl -n dev get pvc
    NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    nfs-test-pvc   Bound    pvc-22200691-234e-43e7-a9c7-73e6a810cf99   1Gi        RWX            nfs-client     <unset>                 52s
  • Check that the test file has been written. Expect to see: Hello from NFS in DEV

    kubectl -n dev exec -it nfs-test-pod -- cat /mnt/testfile
    Hello from NFS in DEV
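On the NAS side, the provisioner creates one directory per PV under the export, named ${namespace}-${pvcName}-${pvName} by default, so the same test can be verified from any host that mounts the share. A sketch, assuming the export is mounted at /mnt/k8s (a hypothetical mount point):

```shell
# List provisioned directories on the share; the test PVC should appear as
# something like dev-nfs-test-pvc-pvc-22200691-... and contain testfile
ls /mnt/k8s
```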
  • Clean up

    kubectl delete pod nfs-test-pod -n dev
    kubectl delete pvc nfs-test-pvc -n dev
    pod "nfs-test-pod" deleted
    persistentvolumeclaim "nfs-test-pvc" deleted
    kubectl get all -A | grep nfs-test || echo "✅ No test resources remain"
    ✅ No test resources remain

NFS Summary

The NFS setup provides dynamic storage provisioning through the nfs-subdir-external-provisioner Helm chart. By linking the NAS shared folder (/k8s) with appropriate read/write access for all cluster nodes, Kubernetes can automatically create and manage persistent storage directories for workloads. After deployment, verification confirmed that the NFS provisioner, storage class, and test pod functioned correctly, demonstrating successful write access to the NAS. The environment was then cleaned up, leaving no residual resources.