From Zero to Code: Kubernetes Dynamic NFS Storage

Preface

This is an ongoing series of blog posts where I take technologies used around the industry that I've had limited or no exposure to, and try to get them working by pulling together lots of internet guides until I can provide an end-to-end working solution.

Remember, this is my journey. It might not be yours...

I'm on a voyage of discovery, and so far I've found a rocky coastline of poorly written guides, misinformation, out-of-date information, products that don't work for me, and half-baked documents. DevOps and DevOps tools are changing rapidly; what works for me may not work for you, your goals are not my goals, and your learning won't be the same as mine.

I write these from the position of how I solve problems when learning new products. Can I get it to work? Can I scale it? How do I secure it? Does it fail over? How do I monitor it? How do I automate it? For me, this always starts with: can I do the basics I need to with the product?

I write this because some will tell me "I'm doing it wrong" but won't help with "doing it right", and for me, if I set a target and that target is achieved, well, that's a step in the right direction.

You will need

To follow along with this blog you'll need:

  • 3 Ubuntu 20.04 servers - 2 vCPU, 2 GB RAM, 20 GB HDD
  • SSH client
  • Ability to access the internet from these servers
  • Patience
  • A ProGet repo set up as per the previous tutorial (or something you can run from Docker)
  • The n8n image from the ProGet tutorial
  • An NFS server

What's in this post?

Within this post I'll be exploring the following:

  • Setting up a Persistent Volume (PV)
  • Setting up a Persistent Volume Claim (PVC)
  • Packaging this up in a Helm Chart
  • Using a Storage Class to dynamically provision storage

Let's Begin

This post is broken up into two different methods of delivering persistent storage to a Kubernetes cluster using NFS.

There are many other back-end storage providers out there: AWS, Google, and Microsoft all provide them on their platforms, and then there are third-party examples like Ceph and ScaleIO.

The whole point of this is that, out of the box, containers are supposed to blow away everything, including data, when you restart them. If you're creating proxies or web servers this might not be an issue as they are immutable items: tear them down, build them back up.

If, however, you're deploying applications onto a K8s cluster, like my example n8n or Mattermost, then you'd probably like the data in the database to survive a reboot.

To do this, a location outside of the container needs to be used to hold that data; when the container is sprung back into life it is mounted with visibility of the volume the data sits on, and therefore the data.

This external storage is referred to as Persistent Storage.

I've chosen to use NFS as my persistent storage because it's easy to set up, it's stable, and I already use it on my network.
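
Before touching Kubernetes, it's worth a quick sanity check that the NFS server is actually exporting something the nodes can see. A minimal check from one of the nodes, assuming the NFS client tools are installed (covered below) and using the server IP that appears later in this post, might look like this:

showmount -e 192.168.40.202
# the export used later in this post (/Dev1Partition1/kubernetes) should appear in the list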

Two ways to skin this cat

Method 1: The Sysadmin Way

Persistent Storage

For the purpose of this post, and because Kubernetes is running on bare metal, I'll be looking at NFS volumes and Persistent Volume Claims.

There is a really good write-up and explanation of this and the other options in the Kubernetes Volumes documentation (see Further Reading at the end of this post).

NFS

NFS, or the Network File System, is a protocol from the UNIX world that allows you to mount a remote file system over the network. The file system can be defined in a YAML file and then connected to and mounted as your volume.

If a Pod goes down or is removed, an NFS volume is simply unmounted, but the data will still be available and, unlike an emptyDir, it is not erased. However, if you take a look at the NFS example in the documentation, it says you should create a Persistent Volume Claim first and not directly mount a volume with NFS.

Persistent Volume Claims

With a Persistent Volume Claim, the Pod can connect to volumes wherever they are through a series of abstractions. The abstractions can provide access to underlying cloud-provided back-end storage volumes or, in the case of bare metal, on-prem storage volumes.

An advantage of doing it this way is that an Administrator can define the abstraction layer. This allows developers to obtain the volume ID and the NFS type through an API without actually having any of those details. This additional abstraction layer on top of the physical storage is a convenient way to separate Ops from Dev. Developers can instead use PVC to access the storage that they need while developing their services.

These are the parts to a persistent volume claim:

  • Persistent Volume Claim - a request for storage that can be mounted to a Pod dynamically without having to know the backend provider of the volume.
  • Persistent Volume - the specific volume being called as outlined in the claim as provisioned by an Administrator. These are not tied to a particular Pod and are managed by Kubernetes.
  • Storage class - allows dynamic storage allocation which is the preferred ‘self serve’ method for developers. Classes are defined by administrators.
  • Physical storage - the actual volume that is being connected to and mounted.

Getting Started

Three YAML files are required to set this all up (the names are mine, call the YAML files what you want):

  1. Create the Persistent Volume (pv.yaml)
  2. Create a Persistent Volume Claim (pvc.yaml)
  3. Update your deployment YAML to use the Persistent Volume

NOTE: you will need an NFS server to connect to.

Personally, I see this as the following:

  • A PersistentVolume is the underlying Kubernetes storage blob, in this case sat on top of an NFS share on an NFS server, which can be shared across the nodes
  • A PersistentVolumeClaim is code that takes a chunk of that storage blob and allocates its use to an application.
  • Applications refer to the PVC and define how their data is stored on the PV (locations, which files, mounting container volumes to host directories)

Install the NFS client

All your Kubernetes nodes will need the NFS client installed on them for this to work. On Ubuntu the client tools are in the nfs-common package.

sudo apt install nfs-common
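
To confirm each node can actually reach the share before Kubernetes gets involved, a quick throwaway mount is a useful sanity check. This sketch uses the server and export path from my pv.yaml below; adjust for yours:

# temporary mount point for the test
sudo mkdir -p /mnt/nfs-test
# mount the same export used later in pv.yaml
sudo mount -t nfs 192.168.40.202:/Dev1Partition1/kubernetes /mnt/nfs-test
ls /mnt/nfs-test
# tidy up - Kubernetes will do its own mounting later
sudo umount /mnt/nfs-test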

Create a Persistent Volume (PV)

First, a Persistent Volume needs to be created

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard
  nfs:
    path: /Dev1Partition1/kubernetes
    server: 192.168.40.202

Let's break this down

There are links at the end of this post that will provide further reading on this.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv

Define the type of YAML file this is and give the PersistentVolume a name/tag which can be referred to by other configurations

spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany

As we are defining a persistent volume that could be used by multiple applications, the size of the volume (10 GiB), the volume mode, and how it is accessed are all defined.

  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard

Also, define what to do with the data after the volume is released or taken down; I've used Retain because I'd like to keep the data after this post's teardown. Then define what type of storage the persistent volume will use, in this case NFS, and whether it is a hard or a soft mount. (fnar)

  nfs:
    path: /Dev1Partition1/kubernetes
    server: 192.168.40.202

How to mount the NFS share: the path is the one defined in the exports file on the NFS server, and the server is the IP of the NFS server. This section is the equivalent of the mount command

mount -t nfs 192.168.40.202:/Dev1Partition1/kubernetes /mnt/nfsshare

It's the SERVER information being provided here so Kubernetes can mount it.
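
Once pv.yaml has been applied (the kubectl create commands are further down), a quick way to sanity check that the PV registered correctly is:

kubectl get pv nfs-pv
# STATUS should show Available, or Bound once the PVC below has claimed it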

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Let's break this down

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc

Again, letting Kubernetes know what this YAML file does and giving a name/tag by which the claim can be referred to.

spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany

Defining what type of PV it is looking to claim space on, and how it wants to use it

  resources:
    requests:
      storage: 1Gi

The amount of storage that is being laid claim to on the PV.

My understanding here is that the PVC is made per app and the PV is a per-cluster thing. So each app will need its own PVC defined, along with the amount of storage requested.
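
Once both the PV and the PVC have been created (see the kubectl create commands further down), you can check that the claim actually bound to the volume. A minimal check, assuming the names used in these files:

kubectl get pvc nfs-pvc
# STATUS should be Bound, with VOLUME showing nfs-pv and CAPACITY showing the PV's 10Gi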

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-server
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment
    spec:
      volumes:
        - name: n8n-pv-storage
          persistentVolumeClaim:
            claimName: nfs-pvc
      containers:
      - image: 192.168.40.43:8081/dockercontainers/library/n8n:latest
        name: n8n-server
        volumeMounts:
          - mountPath: "/root/.n8n"
            name: n8n-pv-storage
            subPath: n8n
      imagePullSecrets:
        - name: progercred

Let's break this down

I've kept using the same deployment file used in all the zero to code posts.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-server
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment

Nothing changed here from a standard deployment

    spec:
      volumes:
        - name: n8n-pv-storage
          persistentVolumeClaim:
            claimName: nfs-pvc

The volume is named n8n-pv-storage (use whatever name you want) and the deployment is then linked to the PVC using the PVC name from the YAML above.

      containers:
      - image: 192.168.40.43:8081/dockercontainers/library/n8n:latest
        name: n8n-server
        volumeMounts:
          - mountPath: "/root/.n8n"
            name: n8n-pv-storage
            subPath: n8n
      imagePullSecrets:
        - name: progercred

And finally, the magic happens...

VolumeMounts is added, and I'll refer to the docker command

docker run -v <host folder>:<container folder>

Using the docker command, a folder on the docker host can be mounted into the container.

This is what I'm doing in Kubernetes

  • The mountPath is the path inside the container that you'd like backed by the persistent storage.
  • The name lets Kubernetes know to use the above-named n8n-pv-storage volume, which in turn uses the PVC to claim space on the PV.
  • The subPath lets Kubernetes know, when it mounts the container's /root/.n8n folder, to do it on /Dev1Partition1/kubernetes/n8n and to create n8n if it doesn't exist (there's a quick way to check this after the list).
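
To check this from inside the running pod rather than on the NFS server, something like the following should work once the deployment has been created (kubectl exec with deploy/<name> picks a pod from the deployment; ls is about the only tool I'd rely on being in the image):

kubectl exec deploy/n8n-deployment -- ls /root/.n8n
# the files listed here should also appear under /Dev1Partition1/kubernetes/n8n on the NFS server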

To use all of these the following commands can be run

kubectl create -f pv.yaml
kubectl create -f pvc.yaml
kubectl create -f deployment.yaml

Run the following to see what has happened under the hood.

kubectl get pods

Outputs the pod name, based on the name defined at the top of the deployment.yaml file

NAME                          READY   STATUS    RESTARTS   AGE
n8n-deployment-6b555c49c5-kpgq9   1/1     Running   0          88m

Then run

kubectl describe pod n8n-deployment

This provides a lot of output; the useful information here is:

Mounts:
  /root/.n8n from n8n-pv-storage (rw,path="n8n")
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6nlrt (ro)
Volumes:
  n8n-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pvc
    ReadOnly:   false

If you head over to the NFS server, you should see that the n8n folder has been created and has data in it.

Adding to your helm file

It is also possible to have the pvc.yaml and deployment.yaml content in a single file for your Helm install. Your Helm chart's deployment.yaml should look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-server
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment
    spec:
      volumes:
        - name: n8n-pv-storage
          persistentVolumeClaim:
            claimName: nfs-pvc
      containers:
      - image: 192.168.40.43:8081/dockercontainers/library/n8n:latest
        name: n8n-server
        volumeMounts:
          - mountPath: "/root/.n8n"
            name: n8n-pv-storage
            subPath: n8n
      imagePullSecrets:
        - name: progercred

Once the Helm deployment.yaml is updated, run the following to package the chart and push the update to the ProGet repo:

helm package --version 0.1.2 n8n
curl http://192.168.40.43:8081/helm/helmdev --user david:MyStupidPassWord --upload-file n8n-0.1.2.tgz
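
To actually consume the updated chart from the cluster side, a rough sketch would be the following (the repo alias progethelm is my own naming; the URL and credentials are the same ones used for the upload above):

# add the ProGet helm feed once, then refresh and upgrade the release
helm repo add progethelm http://192.168.40.43:8081/helm/helmdev --username david --password MyStupidPassWord
helm repo update
helm upgrade --install n8n progethelm/n8n --version 0.1.2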

What did we learn?

  • How to create a PersistentVolume (PV)
  • How to create a PersistentVolumeClaim (PVC)
  • How to use the PVC within a deployment
  • How to mount a container directory on the host directory.
  • How to run this
  • How to test this
  • How to wrap this up in a Helm deployment.yaml
  • How to Upgrade the Helm package
  • How to push the upgrade to proget

Method 2: The Devops Way

Using Dynamic storage

The above method is pretty static; it needs the PersistentVolume set up and the developer to know a few things before the deployment can be done.

The "preferred method"  seems to be the Dynamic storage method where the deployment.yaml just needs to refer to an existing

To get this working I've referred a lot to two primary data sources:

  • Just me and Opensource on YouTube
  • Deploying Dynamic NFS Provisioning in Kubernetes on the Exxact blog

I'd have got nowhere without them.

BEFORE YOU START.. READ THIS OR IT WILL ALL FAIL

About halfway through many tutorials, I was getting to a point where the deployment of the NFS provisioner was stuck pending and wouldn't change. Using the command

kubectl logs <pod name>

I could see the error

"unexpected error getting claim reference: selfLink was empty, can't make reference"

It seems that in version 1.20.0 of Kubernetes the selfLink field stopped being populated by default, and the fudge workaround for it is:

Current workaround is to edit /etc/kubernetes/manifests/kube-apiserver.yaml

Under here:

spec:
  containers:
  - command:
    - kube-apiserver

Add this line:

    - --feature-gates=RemoveSelfLink=false

Then do this:

sudo kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
sudo kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml

(I had to do it twice to get it to work)

This is outlined at

Using Kubernetes v1.20.0, getting "unexpected error getting claim reference: selfLink was empty, can't make reference" · Issue #25 · kubernetes-sigs/nfs-subdir-external-provisioner
Using Kubernetes v1.20.0. Attempting to create PVCs, they all remain in "pending" status. Doing a kubectl logs nfs-client-provisioner gives this: I1210 14:42:01.396466 1 leaderelection...

The end of which has been updated to state

No more workarounds please :-) - this issue was closed for a reason guys! Just use new available docker image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0 and it works fine. Do not edit kube-apiserver.yaml, there is no need to.
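
In other words, the cleaner route (which I haven't gone down here) is to point the provisioner deployment further down at that newer image instead of touching the API server, either by editing its image: line before deploying it or, once it has been created, with something like:

kubectl set image deployment/nfs-client-provisioner \
  nfs-client-provisioner=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0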

Now that's fixed.. Let's continue

There are 4 YAML files that are needed to get this working

rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Let's break this down...

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

There is a lot going on in this file; however, at a basic level, the provisioner container that we will install needs rights to do things, and this file sets up those rights. It sets up the minimum rights needed, as roles, to ensure the provisioner container can create and manage PersistentVolumes, watch claims and storage classes, and record events as needed.
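
A way to sanity-check that the rights landed where expected, once rbac.yaml has been applied, is to impersonate the service account (assuming it sits in the default namespace, as in this file):

kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-client-provisioner
# should answer "yes"; a verb/resource not listed in the rules should answer "no"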

class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: homelan/nfs
parameters:
  archiveOnDelete: "false"

Let's break this down...

The class.yaml creates a storage class: a definition of what to do when this storage type is chosen. It's possible to have many different storage classes on a cluster; at a basic level they could be fast/slow storage or local/cloud storage.

metadata:
  name: managed-nfs-storage

The name managed-nfs-storage will be the trigger name to invoke this storage class and use it.

provisioner: homelan/nfs

Many of the guides I looked at set this to example.com/nfs; however, it's just a reference name, and using a location/type format like homelan/nfs makes debugging easier.
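
The important bit is that this value matches the PROVISIONER_NAME environment variable in the provisioner deployment further down. A quick way to read it back off the cluster once class.yaml has been applied:

kubectl get storageclass managed-nfs-storage -o jsonpath='{.provisioner}'
# should print homelan/nfs - the same value as PROVISIONER_NAME in deployment.yaml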

pvc-nfs.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-sc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Let's break this down...

This is where the developer might get involved. In the first method, using a Persistent Volume (PV) and a Persistent Volume Claim (PVC), the PV was created first and then the PVC was bound to it. Here, only the PVC is needed.

metadata:
  name: pvc-nfs-sc

This is the name used in a deployment.yaml or pod.yaml to invoke this PVC

  storageClassName: managed-nfs-storage

This links back to the name we used in the class.yaml

  resources:
    requests:
      storage: 1Gi

This PVC is looking to use 1 GiB of storage on the NFS server.

Where is the pv.yaml?

It's not needed. We will deploy a provisioner container; when the PVC is created, the storage class will dynamically do all the PV work in the background, so a sysadmin doesn't need to.

deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:    
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: homelan/nfs
            - name: NFS_SERVER
              value: 192.168.40.242
            - name: NFS_PATH
              value: /Dev1Partition1/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.40.242
            path: /Dev1Partition1/kubernetes

Let's break this down...

This is the deployment file that pulls down the container which does the persistent volume provisioning dynamically for us.

  name: nfs-client-provisioner

Providing the name of the provisioner

      serviceAccountName: nfs-client-provisioner

Refers back to the RBAC YAML files

        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes

Pulling a docker container and having a mountpath inside the container of /persistentvolumes.

          env:
            - name: PROVISIONER_NAME
              value: homelan/nfs
            - name: NFS_SERVER
              value: 192.168.40.242
            - name: NFS_PATH
              value: /Dev1Partition1/kubernetes

Defining some docker environment variables letting the container know the name of the provisioner (remember this is an arbitrary name; many people use example.com/nfs, I used my location/type format).

The NFS server and path on the NFS server are added

      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.40.242
            path: /Dev1Partition1/kubernetes

The volumes section mounts the NFS share /Dev1Partition1/kubernetes into the container at /persistentvolumes.

Deploy the YAML Files

Let's deploy the YAML Files

First RBAC

kubectl create -f rbac.yaml

Check it

 kubectl get clusterrole,clusterrolebinding,role,rolebinding | grep nfs

Returns

clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner                                          2021-06-21T08:25:46Z
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner                             ClusterRole/nfs-client-provisioner-runner                                          5h33m
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner   2021-06-21T08:25:46Z
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner   Role/leader-locking-nfs-client-provisioner   5h33m

This looks a mess on here and looked a mess on the screen too.

All the roles and cluster roles were deployed successfully

Next the Storage Class

kubectl create -f class.yaml 


storageclass.storage.k8s.io/managed-nfs-storage created

Check it

kubectl get storageclass 
managed-nfs-storage homelan/nfs Delete Immediate  false       137m

Deploy the provisioning container

Next, pull down the provisioning container image

kubectl create -f deployment.yaml 


deployment.apps/nfs-client-provisioner created

Check it

kubectl get all

Will return

NAME                                         READY   STATUS    RESTARTS  
pod/nfs-client-provisioner-866d47677-wt5k7   1/1     Running   3         

Further investigation using

 kubectl describe pod nfs-client-provisioner

Amongst all the information you should find

    Environment:
      PROVISIONER_NAME:  homelan/nfs
      NFS_SERVER:        192.168.86.202
      NFS_PATH:          /Dev1Partition1/kubernetes
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7p66m (ro)

Showing the expected mount

      /persistentvolumes from nfs-client-root (rw)
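
If you want another quick check at this point, listing the mount from inside the provisioner pod should show the same files as the NFS export (assuming the image ships a basic ls; if not, checking on the NFS server itself does the same job):

kubectl exec deploy/nfs-client-provisioner -- ls /persistentvolumes
# should show the current contents of /Dev1Partition1/kubernetes on the NFS server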

Everything from this point on will only work if the fix above has been applied.

Create a Persistent Volume

For this example I will use the n8n-deployment.yaml file to deploy N8N with a dynamic storage volume.

First, check if there are any existing PVs or PVCs on the cluster

kubectl get pv,pvc 


No resources found in default namespace.

n8n-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-server
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment
    spec:
      volumes:
        - name: n8n-pv-storage
          persistentVolumeClaim:
            claimName: pvc-nfs-sc
            
      containers:
      - image: 192.168.40.43:8081/dockercontainers/library/n8n:latest
        name: n8n-server
        volumeMounts:
          - mountPath: "/root/.n8n"
            name: n8n-pv-storage
            subPath: n8n
      imagePullSecrets:
        - name: progercred

Let's break this down...

The deployment file has only changed in one line

      volumes:
        - name: n8n-pv-storage
          persistentVolumeClaim:
            claimName: pvc-nfs-sc

the line

claimName: pvc-nfs-sc

This is the same name we used in the pvc-nfs.yaml file

Deploy this

kubectl create -f n8n-deployment.yaml

Check it

kubectl get pods

returns

NAME                                     READY   STATUS    RESTARTS
n8n-deployment-67b99fbbd8-pp7s4          1/1     Running   0
nfs-client-provisioner-866d47677-wt5k7   1/1     Running   0

If we look at the container mounts section of the deployed pod

 kubectl describe pod n8n-deployment

look for

Mounts:
  /root/.n8n from n8n-pv-storage (rw,path="n8n")
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmjnk (ro)

Head over to the NFS server and have a look at the NFS Share folder

ls /Dev1Partition1/kubernetes

returns

default-pvc-nfs-sc-pvc-38ae69b6-e9c3-4b98-8542-403088341835

Inside here is my n8n folder.

Add another Dynamic storage PVC

If I want to add another PVC I just need to create a new pvc-nfs.yaml

pvc-nfs2.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-sc2
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Add a new Name under Metadata

metadata:
  name: pvc-nfs-sc2

Then in my n8n-deployment change the claim name to reflect this

      volumes:
        - name: n8n-pv-storage
          persistentVolumeClaim:
            claimName: pvc-nfs-sc2

This would then create a new folder on the NFS server with a PV attached to it.

Set the default storage class

If you run more than one storage class, you may want to set one of them as the default storage class to use. To do this:

Find the storage class name

kubectl get storageclass

returns

NAME                 PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ...
managed-nfs-storage  homelan/nfs Delete        Immediate         ...

Run

kubectl patch storageclass managed-nfs-storage  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Get the storage class again

kubectl get sc

this time

managed-nfs-storage (default)
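
If you change your mind later, or add a faster class you'd rather make the default, the same patch with the annotation flipped to "false" removes the default flag:

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'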

Tear it down

With this configuration, I can run

kubectl delete deployment n8n-deployment

Deleting the deployment will remove the pod, but not the PV and PVC; those have to be deleted separately. This is good because it means we can remove the app, redeploy it, possibly with an update, and the data will remain in place.

To delete the PVC and the PV run

kubectl get pvc,pv

To show the names then run

kubectl delete pvc pvc-nfs-sc

Deleting the PVC will dynamically delete the PV as well.
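
That behaviour comes from the storage class: dynamically provisioned PVs inherit its reclaim policy, which here is Delete (as shown in the kubectl get storageclass output above). A quick way to confirm what a PV will do when its claim goes away:

kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase
# PVs created via managed-nfs-storage will show Delete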

Done..

Thoughts

This has been the hardest thing so far to get my head around and get working in this Kubernetes journey. It's so, so poorly documented, with so many assumptions, it's unbelievable. There is also a major change in Kubernetes 1.20.x+ which means that 99.9% of the instruction sets, including the ones I linked to, don't work without a "fudge".

I'm going to be brutally honest: I've written this all up to remind me, however it might all be complete "it worked well for me" rubbish. I'm going to need to refer to someone who knows what they are doing to double-check this.

I can tell you that all the code and instructions above DO work for me, and have done more than once.

Further Reading

  • How to Configure NFS based Persistent Volume in Kubernetes - a tutorial on how to configure and use NFS-based persistent volumes in Kubernetes pods.
  • Volumes - the Kubernetes documentation on volumes and why on-disk files in a container are ephemeral.
  • Persistent Volume Claims with different subpaths · Issue #20466 · kubernetes/kubernetes
  • Using NFS - Configuring Persistent Storage | Installation and Configuration | OpenShift Enterprise 3.1
  • Shared NFS and SubPaths in Kubernetes - setting up a service-specific NFS mount path using a Synology DiskStation.
  • Using Kubernetes v1.20.0, getting "unexpected error getting claim reference: selfLink was empty, can't make reference" · Issue #25 · kubernetes-sigs/nfs-subdir-external-provisioner
