Preface
This is an ongoing series of blog posts where I take technologies used around the industry that I have had limited or no exposure to, and look to get them working by pulling together lots of internet guides until I can provide an end-to-end working solution.
In the last post, I got a local 3-node Kubernetes cluster working. I purposely used kubectl and kubeadm because, as I understand it, they are generic enough to be useful on self-hosted or most cloud deployments of Kubernetes. Other options do exist, like MicroK8s and Minikube, however for me they add another layer of abstraction that I didn’t need.
This post builds on the last post’s setup of 1 x k8master and 2 x k8nodes and looks at how I can do certain things with Kubernetes.
Remember, this is my journey. It might not be yours…
I’m on a voyage of discovery, and so far I’ve found a rocky coastline of poorly written guides, misinformation, out-of-date information, products that don’t work for me and half-baked documents. DevOps and DevOps tools are rapidly changing; what works for me may not work for you, your goals are not my goals, and your learning won’t be the same as mine.
I write these from the position of how I solve problems when learning new products. Can I get it to work? Can I scale it? How do I secure it? Does it fail over? How do I monitor it? How do I automate it? For me this always starts with: can I do the basics I need to with the product?
This is written because some will tell me “I’m doing it wrong” but don’t help in “doing it right”, and for me, if I set a target and that target is achieved, well, that’s a step in the right direction.
You will need
To run with this blog you’ll need
- 3 Ubuntu 20.04 servers – 2 vCPU, 2 GB RAM, 20 GB HDD
- SSH Client
- Ability to Access the internet from these servers
- Patience
- A ProGet repo set up as per the previous tutorial (or something you can run from Docker).
- The N8N image pushed to the ProGet repo as per that tutorial
What’s in this post?
Within this post I’ll be exploring the following:
- Using HELM Charts to deploy the N8N app
Let’s Begin
Helm Charts
Helm is, at its most basic level, the equivalent of apt-get, yum, DNF, Chocolatey or Brew for Kubernetes. Using a simple command line, helm will install charts from public or private repositories and manage the install of each helm chart.
A helm chart is everything you need to install the container software, wrapped up in folders and mostly YAML files.
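To make the comparison concrete, once Helm is installed the day-to-day workflow against a public chart repository looks something like the sketch below (the Bitnami repository is just an example; any reachable chart repo behaves the same way):

# Add a public chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Find a chart, install it as a named release, then remove it again
helm search repo bitnami/nginx
helm install my-nginx bitnami/nginx
helm uninstall my-nginx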
Installing Helm
Helm has a lot of ways to install itself on your OS and they are covered here. As I’ve used Ubuntu 20.04 for the servers, the instructions I’ll be using are:
sudo snap install helm
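A quick way to confirm the install worked is to ask Helm for its version (the exact version string will differ on your system):

helm version --short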
Creating a Helm Chart – Nginx
As a first example, we will set up a simple NginX server, using Helm to pull down the container from Docker Hub, set up the services and deploy it to our K8s cluster.
cd ~
mkdir helm
cd helm
helm create nginxchart
nginxchart is just a name; it could just as easily be nginx, b175fyshe324 or webserver.
The helm create command will create the following files and folders
nginxchart
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
I’ll cover these in more detail in the next stage; for this example, the only file needed is values.yaml.
It looks like this (I’ve removed some of the comments to make it easier to read). The best resource I’ve found for describing this file is here:
https://opensource.com/article/20/5/helm-charts
# Default values for nginxchart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: "nginxapp"
fullnameOverride: "nginxchart"

serviceAccount:
  create: true
  name: "nginxchart"

podAnnotations: {}

podSecurityContext: {}

securityContext: {}

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
For this example, here are the changes that have been made from the defaults.
image:
  repository: nginx
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
The image section has two things you need to look at: the repository where you are pulling your image and the pullPolicy.
The default pullPolicy in a freshly generated chart is IfNotPresent, which means the image is only pulled if it isn’t already present on the node.
There are two other options:
- Always, which pulls the image on every deployment or restart (I’ve changed it to this here, in case a cached image is broken), and
- Never, which never pulls and relies on the image already being on the node.
If you want the most up-to-date build every time, the usual combination is pullPolicy: Always with a tag such as latest, but only do this if you trust your image repository to stay compatible with your deployment environment.
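Once the release is running, the policy that was actually applied can be read back from the pod with kubectl; this is just a sketch, substitute your own pod name:

kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].imagePullPolicy}'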
imagePullSecrets: []
nameOverride: "nginxapp"
fullnameOverride: "nginxchart"
The first override here is imagePullSecrets, which is a setting to pull a secret, such as a password or an API key you’ve generated as credentials for a private registry. (This will be used with the N8N example later)
Next are nameOverride and fullnameOverride.
From the moment you ran helm create, the chart’s name (nginxchart) was added to a number of configuration files, from the YAML ones above to the templates/_helpers.tpl file.
If you need to rename a chart after you create it, this section is the best place to do it, so you don’t miss any configuration files.
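For reference, that name logic lives in templates/_helpers.tpl; the scaffold that helm create generates contains something along these lines (trimmed):

{{- define "nginxchart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}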
serviceAccount:
  create: true
  name: "nginxchart"
Service accounts provide a user identity for the application to run under in the pod inside the cluster. If the name is left blank, one will be generated from the full name using the _helpers.tpl file. The general recommendation is to always set up a service account, so that the application is directly associated with a user that is controlled in the chart.
As an administrator, if you rely on the default service account you will end up with either too few or too many permissions, so change this.
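The chart only creates the account when create is true; the generated templates/serviceaccount.yaml is roughly the following (trimmed from the standard scaffold):

{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "nginxchart.serviceAccountName" . }}
  labels:
    {{- include "nginxchart.labels" . | nindent 4 }}
{{- end }}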
Installing a local Helm chart
Having made no other changes to the files, the chart can be installed with helm install <release-name> <chart-path>; in this case:
helm install nginxchart nginxchart/ --values nginxchart/values.yaml
Viewing the results of the Helm install
This will deploy the nginx pod and the nginx service, as we can see using Lens:




There will also be some output after the command
NAME: nginxchart
LAST DEPLOYED: Mon Jun 14 15:27:40 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginxapp,app.kubernetes.io/instance=nginxchart" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
Run the following commands on the k8master node
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginxapp,app.kubernetes.io/instance=nginxchart" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
kubectl port-forward $POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
This should allow for the following test:
curl http://127.0.0.1:8080
and the HTML output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
NginX has been deployed using the helm command.
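Beyond Lens, the release and the objects it created can also be listed from the command line; a quick sketch using the labels the scaffold applies:

helm list
kubectl get deployments,pods,services -l app.kubernetes.io/instance=nginxchart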
Tear it down
Tearing all this down is equally simple too…
helm delete nginxchart
release "nginxchart" uninstalled
Removes everything helm installed.
Observations
If I run the helm install command a second time
helm install nginxchart nginxchart/ --values nginxchart/values.yaml
I get an error message
Error: cannot re-use a name that is still in use
What if we wanted to change a single value in the values.yaml file without editing it? We can do that easily with the --set flag:

helm install nginxchart nginxchart/ --set image.pullPolicy=IfNotPresent

Changed. This is why you put all of your variables into the values.yaml file (more on this later).
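To confirm which values a release actually ended up with, Helm can print them back out; --all includes the chart defaults as well as anything overridden on the command line:

helm get values nginxchart
helm get values nginxchart --all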
Want to see the rendered output before anything is installed, to ensure things look correct? Try the following command:

helm install nginxchart nginxchart/ --values nginxchart/values.yaml --dry-run --debug

By adding the --dry-run and --debug flags I’ve stopped the Helm chart from installing and viewed the output it would have produced.
What did we learn?
- How to install Helm
- How to deploy NginX by just editing the values.yaml file
- How to install using helm install
- How to tear down an install using helm delete
Creating a Helm Chart using existing N8N Yaml Files
The NginX example is a very high-level view of what is needed to get a helm chart working. The outcome of this section is to get the N8N container working, and this can be done using the deployment and ingress YAML files I’ve already created.
In the previous tutorial, there were two YAML files, the deployment and ingress files, which were used with kubectl create to bring up the N8N service.
n8n-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-deployment
    app: n8n-deployment
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment
    spec:
      containers:
        - image: 192.168.40.43:8081/dockercontainers/library/n8n:latest
          name: n8n-server
      imagePullSecrets:
        - name: progetcred
n8n-traefik-ingress.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: n8n-traefik
  namespace: default
spec:
  entryPoints:
    - n8n
  routes:
    - match: Host(`n8n.lan`)
      kind: Rule
      services:
        - name: n8n-deployment
          port: 5678
These files can be used as-is within a helm chart to deploy N8N
Create the helm files and folders
helm create n8n
This will create as before the following directory tree
n8n
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
This time, templates/deployment.yaml and templates/ingress.yaml will be replaced with n8n-deployment.yaml and n8n-traefik-ingress.yaml:
cp n8n-deployment.yaml n8n/templates/deployment.yaml
cp n8n-traefik-ingress.yaml n8n/templates/ingress.yaml
The values.yaml file then needs editing
image:
  repository: 192.168.40.43:8081/dockercontainers/library/n8n:latest
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
Update the Image section with the local Proget Repository URL and change the pullPolicy to Always
imagePullSecrets:
  - name: progetcred
nameOverride: "n8n-app"
fullnameOverride: "n8n-chart"
Use the progetcred secret which was created using the following kubectl command line in the last post.
kubectl create secret docker-registry progetcred --docker-server=http://192.168.40.43:8081 --docker-username=david --docker-password=MyStupidPassword
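If you’re not sure the secret still exists from the last post, it can be checked (and its registry config decoded) before relying on it; a quick sketch:

kubectl get secret progetcred
kubectl get secret progetcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode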
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "n8nsvc"
Set the service account name to “n8nsvc”.
service:
  type: ClusterIP
  port: 5678
Change the port to 5678 so the port N8N listens on is used, and make sure ClusterIP is set as the type.
ingress:
  enabled: true
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: n8n.lan
      paths:
        - path: /
          pathType: ImplementationSpecific
Finally, make sure the ingress enabled setting is true and the host is set to the desired reachable URL; in this example we use n8n.lan.
Save and exit the file
Because N8N runs on a non-standard port, I’ve also updated templates/service.yaml to reflect this.
apiVersion: v1
kind: Service
metadata:
  name: {{ include "n8n.fullname" . }}
  labels:
    {{- include "n8n.labels" . | nindent 4 }}
    app: n8n-deployment
    app.kubernetes.io/instance: n8n-deployment
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 5678
      protocol: TCP
  selector:
    run: n8n-deployment
The change is made under the ports: section
ports:
  - port: {{ .Values.service.port }}
    targetPort: 5678
    protocol: TCP
    name: n8nport
With these changes made and saved, run the following command:
helm install n8n-chart n8n/ --values n8n/values.yaml
After a few seconds, this will deploy N8N.
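A quick sketch of how to confirm the release and the pod are up (the labels come from the reused deployment file, and the IngressRoute CRD from the last post’s Traefik setup):

helm status n8n-chart
kubectl get pods -l run=n8n-deployment
kubectl get ingressroutes -n default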
Using the values.yaml
By doing the above, I’m not really making use of the templating scheme and the items held in the values files.
The prevailing thought in everything I’ve read seems to be to get every variable out of the other YAML files, hold it in values.yaml, and refer to it from the templates as a value tag, as sketched below.
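As a hypothetical sketch of that approach (not what the chart above currently does), the hard-coded image line in templates/deployment.yaml could be swapped for references into values.yaml, assuming the repository value no longer carries the :latest tag:

      # hypothetical containers block for templates/deployment.yaml
      containers:
        - name: n8n-server
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}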
Pushing my N8N Helm Chart to my ProGet Repo
With a working helm chart in place, the next step is to package it up and push it to a repository, in this case, ProGet as a private repository.
helm package <the helm chart you created>
Using the N8N example, from the directory above n8n run:

helm package n8n
This will output
Successfully packaged chart and saved it to: /home/david/helm/n8n-0.1.0.tgz
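The .tgz is just the chart directory archived together with its metadata, so it can be inspected before uploading it anywhere; for example:

tar -tzf n8n-0.1.0.tgz
helm show chart n8n-0.1.0.tgz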
Publish to ProGet
Before the package can be uploaded to ProGet a new feed needs to be created.
Click on Feeds in the Title Bar

Click on Create New Feed

Scroll down and Helm Charts will be displayed

Create a New Feed

The Feed is created
Clicking on Manage Feed will provide options for uploading

In this example, we will use Curl
curl http://192.168.40.43:8081/helm/helmdev --user david:MyStupidPassword --upload-file n8n-0.1.0.tgz
Will result in

Click on n8n

What did we learn?
- Create a helm chart
- Use existing YAML Files within the helm chart
- Use Values with the Helm chart
- Package the Helm Chart
- Create a Helm Feed in Proget
- Upload the packaged Helm file to Proget
Installing the Helm Chart from a Private (ProGet) repo
Add the ProGet Repo
helm repo add proget http://192.168.40.43:8081/helm/helmdev/
Returns
"proget" has been added to your repositories
Update the repo
helm repo update
Returns
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "proget" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈
Search the proget helm repo
helm search repo proget
Returns
NAME         CHART VERSION   APP VERSION   DESCRIPTION
proget/n8n   0.1.2           1.16.0        A Helm chart for Kubernetes
Then run
helm install n8n-chart proget/n8n
This will install the helm chart from your self-hosted repo.
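If a specific chart version is needed, or an already-installed release should pick up a newly pushed version, the same repo covers both; a sketch using the names from the examples above:

helm install n8n-chart proget/n8n --version 0.1.2
helm repo update
helm upgrade n8n-chart proget/n8n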
What did we learn?
- How to package an updated helm chart
- How to upload it to ProGet
- How to add a helm repo
- How to search a helm repo
- How to install from the ProGet repo using helm.
Thoughts
Helm charts are 100% the way to go, it would seem; the ability of any system to have a packaged solution is a good thing, and Helm seems to work well. Of all the things I’ve been learning about K8s, this has so far been the easiest.