In the last few posts in this series, I’ve covered setting up a private Docker repository and using Jenkins with Docker plugins to build a base image from nothing, patch it, and create/test an N8N Docker image using a Jenkins pipeline.

zero to code – Tech Blog Posts – David Field

In this post, I’ll be covering how to

  • Manually build a 3 Node (Master and 2 nodes) Kubernetes Cluster
  • Deploy N8N from my Proget Repo
  • Install Metallb and Traefik
  • Access N8N from outside the Kubernetes cluster

Date Version Change Notes
11 June 2021 1.0 Initial Post
14 June 2021 1.2 Updated post following feedback; added an explanation at the end of why I chose Kubernetes and not MicroK8s or something else

You will need

To follow along with this blog you’ll need

  • 3 Ubuntu 20.04 servers – 2 vCPU, 2 GB RAM, 40 GB HDD
  • SSH client
  • Ability to access the internet from these servers
  • Patience
  • A ProGet repo set up as per the previous tutorial (or something you can run from Docker)
  • The N8N image from the ProGet tutorial

Don’t worry if you don’t have the ProGet/N8N combination; you can use the N8N image from Docker Hub instead.

A disclaimer of sorts

I’m learning some of this at the same time as I’m writing these posts. The purpose of these posts is to help me remember what I did in six months’ time, so I can come back to them as a reference. I make them public because they might help someone.

All the information is out there on the internet, and I’ll include all the links I used to get this working at the end. However, I appreciate there may be a better method. If there is, drop me a line on the Discord server above and I’ll happily learn from it and update the post if it works.

During this post I’ve not ventured into the realms of HTTPS; that will be the next post. The purpose of this post is to build a base from which to start doing other things, like getting certs from HCP Vault, dynamic NFS for persistent storage, and using Jenkins to deploy to the K8s cluster.

Everything has a beginning, a middle and an end; let’s start at the beginning.

Install the Kubernetes Cluster

I’ve chosen Ubuntu 20.04 for this because it’s well documented and it works. The setup will use a Docker backend for containers (yes, I know dockershim support is going away soon, person on Reddit who will pick up on this and have a moan).

Installing a Kubernetes cluster (1 master, 2 nodes) on Ubuntu 20.04

3 machines need to be built on whatever platform you’re using: VMware, VirtualBox, Proxmox, Hyper-V etc.

Use the following hostnames and make a note of your IP addresses; they should be static addresses or a permanent DHCP lease.

  • Machine 1 (Ubuntu 20.04 LTS Server) – k8master1 –
  • Machine 2 (Ubuntu 20.04 LTS Server) – k8node1 –
  • Machine 3 (Ubuntu 20.04 LTS Server) – k8node2 –
  • 2vCPU, 2Gb Ram, 40 Gb HDD

On all three machines run

sudo apt update
sudo apt dist-upgrade -y

Setup the Server Prerequisites

Set the hostname for each server

Run this command on the master node

sudo hostnamectl set-hostname "k8master" 

Run this command on node 1

sudo hostnamectl set-hostname "k8node1"  

Run this command on node 2

sudo hostnamectl set-hostname "k8node2"

On each of the hosts edit the hosts file

sudo nano /etc/hosts

Edit the hosts file

Add the following at the end using YOUR IP ADDRESSES

<k8master-ip>    k8master
192.168.40.41    k8node1
192.168.40.42    k8node2


If you run your own homelab  DNS then you can update the DNS accordingly

Disable Swap on each machine

Kubernetes needs swap disabled; run the following on all 3 of the machines

sudo vi /etc/fstab

Towards the end of this file, there will be a line that reads similar to this

swap.img    none     swap    sw  0   0

Put a hash in front of the line like so

#swap.img    none     swap    sw  0   0

Save and Exit the file

This will stop the swap being used after the next reboot; to stop it being used now, type

sudo swapoff -a
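To confirm swap really is off (a quick extra check of my own, not from the original steps), /proc/swaps lists active swap areas; after swapoff only the header line should remain:

```shell
# Only the header line ("Filename  Type  Size  Used  Priority")
# should be printed when no swap is active
cat /proc/swaps
```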

Setup IP Forwarding

Ubuntu needs IP forwarding enabled on all 3 nodes

sudo vi /etc/sysctl.conf

Add this line to the end of the file

net.ipv4.ip_forward=1
Save and Exit the file

Run the following

sudo sysctl -p

This should return

net.ipv4.ip_forward = 1
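You can also read the live value straight from /proc (an extra check of my own, handy if you want to confirm the setting without re-running sysctl):

```shell
# Prints 1 when IP forwarding is enabled, 0 when it is not
cat /proc/sys/net/ipv4/ip_forward
```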

Install Docker

Kubernetes needs an underlying container system to manage, for this example Docker will be used.

On each of the 3 machines run the following commands

sudo apt install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] focal stable"

sudo apt update

sudo apt install docker-ce

sudo systemctl enable docker

sudo systemctl restart docker

sudo systemctl status docker

This should return

docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2020-05-19 17:00:41 UTC; 17s ago
TriggeredBy: ● docker.socket
       Docs:
   Main PID: 24321 (dockerd)
      Tasks: 8
     Memory: 46.4M
     CGroup: /system.slice/docker.service
             └─24321 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Docker is installed.

Use Docker as a non-root user

sudo usermod -aG docker ${USER}

Log out and back in (or run newgrp docker) for the group change to take effect.

Install Kubernetes Tools

The following tools need to be installed to get Kubernetes up and running:

kubectl, kubelet and kubeadm

Run the following commands on all 3 servers

sudo apt install -y apt-transport-https curl

curl -s | sudo apt-key add

sudo apt-add-repository "deb kubernetes-xenial main"

Update to add the new repository

sudo apt update 

Install the tools

sudo apt install -y kubelet kubeadm kubectl


Initialise Kubernetes on the Master Node

On the k8master node ONLY the Kubernetes cluster needs to be initialised with the following command

sudo kubeadm init

This will result in a lot of text


Once the initialisation succeeds, the following 3 commands need to be run as a regular user, NOT as root:

mkdir -p $HOME/.kube 

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

With the Master node initialised the 2 worker nodes need to join the cluster which is outlined in the last output line.

Run this on k8node1 and k8node2


Your token number will be different so don’t just copy and paste this

sudo kubeadm join --token b4sfnc.53ifyuncy017cnqq --discovery-token-ca-cert-hash sha256:5078c5b151bf776c7d2395cdae08080faa6f82973b989d29caaa4d58c28d0e4e

The resulting output should be fairly comprehensive


The desired output is

The node has joined the cluster

To check the cluster is up and running run the following on k8master

kubectl get nodes

This will output similar to the following

NAME       STATUS     ROLES                  AGE     VERSION
k8master   NotReady   control-plane,master   2d23h   v1.21.1
k8node1    NotReady   <none>                 2d23h   v1.21.1
k8node2    NotReady   <none>                 2d23h   v1.21.1

The NotReady status occurs because we need to deploy a Container Network Interface (CNI) based pod network.

Add-ons like Calico, kube-router and weave-net exist to do this.

As the name suggests, pod network add-ons allow pods to communicate with each other.

For this tutorial we’ll install Calico.

Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services.

Install Calico on Master Node

Run the following command ONLY on the k8master node

kubectl apply -f

The following output will be shown


To check the cluster is up and running as READY run the following on k8master

kubectl get nodes

This will output the following

NAME       STATUS   ROLES                  AGE     VERSION
k8master   Ready    control-plane,master   2d23h   v1.21.1
k8node1    Ready    <none>                 2d23h   v1.21.1
k8node2    Ready    <none>                 2d23h   v1.21.1

To confirm what is installed run on the k8master

kubectl get pods --all-namespaces

Should output something similar to this

Congratulations, this is a working Kubernetes cluster.

What have we done?

  • Built 3 Ubuntu servers
  • Set up the prereqs
  • Installed the Kubernetes Tools
  • Initialised a Master Node
  • Joined 2 worker nodes to the master node
  • Installed Calico so the nodes can talk to each other in the background
  • Confirmed all nodes are marked as READY

How do I use this?

There is a difference between installing something and using it. Kubernetes is very command-line driven, making a lot of use of the kubectl command. From this point forward, unless directed otherwise, the only place to run the commands is from the k8master server on the cluster.


The rest of the tutorial is very command-line driven; Kubernetes, however, isn’t without its IDE interfaces. The one I’ve had the most luck with is Lens.

Lens | The Kubernetes IDE
Lens IDE for Kubernetes. The only system you’ll ever need to take control of your Kubernetes clusters. It’s open source and free. Download it today!

I’m not going to spend too much time other than let you know it exists and there is a good guide for it here.

I’ve used Lens a lot while testing, to view errors and to delete resources.

Test the K8 Cluster

To test the cluster, spinning up a simple container such as Nginx works really well. Unless otherwise specified (as I will do later), images are pulled from Docker Hub.

There are a lot of concepts such as Pods, Deployments and Services which will be used in the remainder of this post. I will put links at the end of the post to learn more about these concepts.

From the command line, to confirm the image can be downloaded and spun up, run the following command:

kubectl create deployment nginx-web --image=nginx

This command creates a K8s deployment called nginx-web using the image nginx from the Docker Hub

A Kubernetes deployment is a resource object in Kubernetes that provides declarative updates to applications. A deployment allows you to describe an application’s life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated.

When successful the output will be

deployment.apps/nginx-web created
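This imperative command has a declarative equivalent. Below is a minimal sketch of roughly the manifest that kubectl create deployment nginx-web --image=nginx generates (the label keys are my assumption); it could be applied with kubectl apply -f:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
spec:
  replicas: 1              # one pod to start with; see the scaling section below
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web     # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx       # pulled from Docker Hub by default
```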

The kubectl get command is used to get basic information about the deployment

kubectl get deployments.apps

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   1/1     1            1           41s

This tells me that there is 1 container running and it’s available.

For some additional information run

kubectl get deployments.apps  -o wide

It’s also possible to view the created pod

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod’s contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific “logical host”: it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.

Run the command

kubectl get  pods

Results in

NAME                         READY   STATUS    RESTARTS   AGE
nginx-web-7748f7f978-nk8b2   1/1     Running   0          2m50s

These two commands show that the kubectl create command did its job: it created a single instance of Nginx as a container and started running it.

Scale it up

One of the reasons for using Kubernetes is scaling to meet the needs of the deployed services. From the command line, scaling from 1 Nginx replica to 4 is done with:

kubectl scale --replicas=4 deployment nginx-web

Run the kubectl get command again

kubectl get deployments.apps nginx-web

Will display

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   4/4     4            4           13m

There are now 4 Nginx replicas running and available.

Final test using HTTPD and Port 80

Deploy the Apache container

kubectl run http-web --image=httpd --port=80

Use kubectl expose to expose the container’s port 80 to Kubernetes, in much the same way docker run -p 80:80 works in Docker.

kubectl expose pod http-web --name=http-service --port=80 --type=NodePort

NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
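Under the hood, kubectl expose generates roughly this Service manifest (a sketch: the selector assumes the run=http-web label that kubectl run puts on the pod, and the nodePort is left out so Kubernetes auto-assigns one from the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  type: NodePort
  selector:
    run: http-web      # kubectl run labels the pod run=http-web
  ports:
  - port: 80           # port of the ClusterIP service
    targetPort: 80     # port the container listens on
```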

Use the kubectl get service command to see the underlying service which links the container to the k8s networking.

kubectl get service http-service

Because I set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).

Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.

NAME           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
http-service   NodePort                <none>        80:31098/TCP   10s

Run the wide version of the command

kubectl get pods http-web -o wide 

http-web   1/1     Running   0      59m   k8node2

That Apache is up can be tested using curl (your internal port and node may be different)

curl http://k8node2:31098

should result in

<html><body><h1>It works!</h1></body></html>

What have we done?

  • Spun up Nginx
  • Scaled Nginx from 1 to 4 containers
  • Spun up Apache
  • Exposed the Apache internal container port to Kubernetes
  • Checked the Apache server was accessible from the k8master node

What you can’t do yet.

See the container from the internet

With all the above tests, you may be asking: why didn’t I just open the Apache server in my web browser, why did I need to use curl?

A valid question. Out of the box, a Kubernetes install doesn’t communicate outside of the Kubernetes cluster. It needs a couple of other additions, which we will install later, to do this.

YAML Files

Running all of this from the command line can seem overwhelming. Yes, it is; however, most of the config can be done with YAML files, which are used with the kubectl apply and kubectl create commands.

This allows you to store the configs in Git and make use of external tools to deploy containers on a K8s cluster.

Getting the N8N Container working on the Kubernetes cluster.

Having run a couple of commands to test that the cluster is working, I would use Lens to remove the pods, deployments and services we just created. A good reason to set up and connect to your cluster.

To run the N8N software, the deployment will be done using YAML files. A load balancer will be installed, and Traefik will be used to route the traffic from the outside/load-balancer IP into the K8s cluster.

Install MetalLb

MetalLB, bare metal load-balancer for Kubernetes

MetalLB is installed from the k8master node; the kubectl apply -f command is run to install from YAML files hosted on GitHub

kubectl apply -f
kubectl apply -f

This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are:

  • The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
  • The metallb-system/speaker daemon set. This is the component that speaks the protocol(s) of your choice to make the services reachable.
  • Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.

The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you provide a config.

An example config file is available within the above link

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      -

The important part of the config is the last line


This is a range of IPs outside of your DHCP range which MetalLB can use and assign to resources. It does appear to be smart enough to check whether an IP is already in use: my jumpbox was inside my defined load-balancer range, and MetalLB ignored the jumpbox IP.
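For illustration, a filled-in pool section might look like this (the range below is made up; pick a free range outside the DHCP scope on your own network):

```yaml
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.40.200-192.168.40.220   # hypothetical range, outside the DHCP scope
```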

To use this config, save it to a file (e.g. metallbconfig.yaml) and run

kubectl create -f metallbconfig.yaml

Confirm the deployment using kubectl

 kubectl -n metallb-system get all

This should result in something similar to

Shows a working Load Balancer

Which looks like this in Lens

TEST Metallb

To test the Load Balancer we will use NGINX again because it’s quick, lightweight and has visible output.

kubectl create deploy nginx --image nginx

kubectl get all

This will output

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-x4z8f   1/1     Running   0          10s

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
service/kubernetes      ClusterIP                              443/TCP
service/nginx-service   NodePort                               80:32246/TCP

Expose port 80 as type LoadBalancer (Metallb)

kubectl expose deploy nginx --port 80 --type LoadBalancer

Check this

kubectl get svc

nginx LoadBalancer  80:30342/TCP   17s

Nginx’s port 80 is now bound to the external MetalLB IP.

You should now be able to access the Nginx home page at the load balancer’s external IP.

This works so why bother with Traefik?

This is a fair question. Imagine you were hosting 100 websites on your Kubernetes cluster; each one of those would need an external-facing IP address on the load balancer with port 443 or 80 assigned to it.

Traefik removes that need by using its routing engine to direct traffic to the correct K8s container.

What have we done?

  • Installed Metallb
  • Configured a range of IP Addresses for Metallb to use
  • Deployed the Metallb YAML config file
  • Installed Nginx
  • Exposed port 80
  • Associated the Nginx service with MetalLB
  • Confirmed we can access the Nginx landing page from an external IP

Install a Traefik Ingress Server

Traefik Documentation

We will install Traefik using Helm. Helm is best described at this point as apt or yum for Kubernetes: a (public or private) repository of predefined, installable K8s packages.

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like Apt/Yum/Homebrew for K8S.

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.

Install Helm and use it.

On Ubuntu helm is a snap package

sudo snap install helm --classic

Once installed use helm to install the Traefik repository

helm repo add traefik

"traefik" has been added to your repositories

Update the Helm repos

helm repo update

Once complete you will see

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈

List the available helm repos

helm repo list

which will list

NAME     URL
traefik

Search the repo to see the version of traefik available.

helm search repo traefik

Will show

NAME              CHART VERSION   APP VERSION   DESCRIPTION
traefik/traefik   9.19.1          2.4.8         A Traefik based Kubernetes ingress controller

This is the latest available at the time of writing.

It’s possible to dump what is about to be installed to file to see what is happening using

helm show values traefik/traefik > traefikvalues.yaml

I want to edit this file because I want to run N8N on port 5678 (I don’t need to, but this is an example of a method of using Traefik on non-standard ports).

vi traefikvalues.yaml

Scroll down to the ports section

ports:
  # The name of this one can't be changed as it is used for the readiness and
  # liveness probes, but you can adjust its config to your liking
  traefik:
    port: 9000
    # Use hostPort if set.
    # hostPort: 9000
    #
    # Use hostIP if set. If not set, Kubernetes will default to 0.0.0.0, which
    # means it's listening on all your interfaces and all your IPs. You may want
    # to set this value if you need traefik to listen on specific interface
    # only.
    # hostIP:
    #
    # Override the liveness/readiness port. This is useful to integrate traefik
    # with an external Load Balancer that performs healthchecks.
    # healthchecksPort: 9000
    #
    # Defines whether the port is exposed if service.type is LoadBalancer or
    # NodePort.
    #
    # You SHOULD NOT expose the traefik port on production deployments.
    # If you want to access it from outside of your cluster,
    # use `kubectl port-forward` or create a secure ingress
    expose: false
    # The exposed port for this service
    exposedPort: 9000
    # The port protocol (TCP/UDP)
    protocol: TCP
  web:
    port: 8000
    # hostPort: 8000
    expose: true
    exposedPort: 80
    # The port protocol (TCP/UDP)
    protocol: TCP
    # Use nodeport if set. This is useful if you have configured Traefik in a
    # LoadBalancer
    # nodePort: 32080
    # Port Redirections
    # Added in 2.2, you can make permanent redirects via entrypoints.
    # redirectTo: websecure
  websecure:
    port: 8443
    # hostPort: 8443
    expose: true
    exposedPort: 443
    # The port protocol (TCP/UDP)
    protocol: TCP
    # nodePort: 32443
    # Set TLS at the entrypoint
    tls:
      enabled: false
      # this is the name of a TLSOption definition
      options: ""
      certResolver: ""
      domains: []
      # - main:
      #   sans:
      #     -
      #     -

After the websecure section add

  n8n:
    port: 10000
    expose: true
    exposedPort: 5678
    protocol: TCP

Save and exit the file

What I’ve done here is added another exposed port to Traefik (5678/TCP) which I’ll cover later.

Install Traefik using the edited traefikvalues.yaml

helm install traefik traefik/traefik --values traefikvalues.yaml -n traefik --create-namespace

Check to see if helm installed Traefik as expected

helm list -n traefik

should return

traefik traefik         1               2021-06-09 14:06:26.292094565 +0000 UTC deployed        traefik-9.19.1  2.4.8

Check in Kubernetes that Traefik has deployed successfully

 kubectl -n traefik get all


NAME                          READY   STATUS    RESTARTS   AGE    pod/traefik-b5cf49d5b-7x9p8   1/1     Running   0          5m50s

service/traefik   LoadBalancer   5678:30068/TCP,80:32371/TCP,443:30185/TCP   5m51s

deployment.apps/traefik   1/1     1            1           5m51s

replicaset.apps/traefik-b5cf49d5b   1         1         1       5m50s

From the output we can see that the service is ready, Traefik has bound to the MetalLB IP, and ports 5678, 80 and 443 are exposed externally

Traefik is running

To test the configuration, we will use Traefik to display the Traefik dashboard “externally”, i.e. outside the K8s cluster.

Expose the Traefik Dashboard

Dashboard – Traefik
Traefik Documentation

Traefik comes with a dashboard, it’s a read-only service meaning you can’t use it to edit settings.

The dashboard is already running within the K8s cluster; however, there has been no configuration to provide access to it through the Traefik router.

We can deploy the dashboard configuration so it is visible externally, using a YAML file and kubectl create.

vi dashboard.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.lan`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService

What is this doing?

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute

Using the Traefik CRD API, an IngressRoute will be set up; this is a route from the outside Traefik port to the internal K8s service, which in this example is the Traefik dashboard.

metadata:
  name: dashboard

The Service will be listed under the name dashboard

entryPoints:
  - web

This refers to the entry points in the traefikvalues.yaml file we edited earlier (the same section where n8n was added); the entry point web is on port 80/TCP.

routes:
  - match: Host(`traefik.lan`)
    kind: Rule

When traffic arrives on port 80, if it is directed to traefik.lan then do something with it; otherwise drop it. You can use whatever DNS name you want here.

Important note: the character above is a backtick (`), not a single quote ('); they are very different things, and if you use ' this will fail.

    services:
      - name: api@internal
        kind: TraefikService

When traffic comes in on port 80/TCP bound for traefik.lan direct it to the Traefik service API

Deploy the dashboard

kubectl create -f dashboard.yaml

Then run the following to make sure things have worked

kubectl describe ingressroute dashboard

This will return

Name:         dashboard
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  traefik.containo.us/v1alpha1
Kind:         IngressRoute
Metadata:
  Creation Timestamp:  2021-06-10T13:46:31Z
  Generation:          1
  Managed Fields:
    API Version:  traefik.containo.us/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:entryPoints:
        f:routes:
    Manager:         kubectl-create
    Operation:       Update
    Time:            2021-06-10T13:46:31Z
  Resource Version:  378794
  UID:               27e45c27-e6ae-4385-8366-6b590998a4cd
Spec:
  Entry Points:
    web
  Routes:
    Kind:   Rule
    Match:  Host(`traefik.lan`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:  <none>

As the last few lines display

Spec:
  Entry Points:
    web
  Routes:
    Kind:   Rule
    Match:  Host(`traefik.lan`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:  <none>

The config has deployed accordingly.

At this point, you’ll either need to update your local DNS (if you run such a thing) to point traefik.lan to the load-balancer IP, or…

edit your /etc/hosts file (on Linux or Mac) to add an entry of the form

<load-balancer-ip>    traefik.lan

Test it out at http://traefik.lan

which will display the dashboard.

Have a click around to see what you can see.

What have we done?

  • Installed Helm
  • Added a Helm repository for Traefik
  • Downloaded and edited the values file
  • Installed Traefik
  • Checked it was bound to the load-balancer IP
  • Deployed a Traefik config to expose the dashboard externally
  • Confirmed it worked

Deploy the N8N Application

All the fundamental building blocks are now in place to deploy the N8N app I have running on my private ProGet Docker repo to the Kubernetes cluster.

There is one last step: setting up a secret. By default Kubernetes will pull a container down from Docker Hub; in this example I need to let it know where to pull the image from, and that location is username/password protected.

Set up a Secret

We’ll run this from the command line; in future posts I’ll cover how to do it as a YAML file.

The command line, run on k8master takes the following format

kubectl create secret docker-registry regcred --docker-server=<server> --docker-username=<username> --docker-password=<password> --docker-email=<email>

Run the following

kubectl create secret docker-registry progetcred --docker-server= --docker-username=david --docker-password=MyStupidPassword123

Check that the secret is in place

kubectl get secret progetcred --output=yaml


apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwOi8vMTkyLjE2OC44Ni4yMjM6ODA4MSI6eyJ1c2VybmFtZSI6ImRhdmlkIiwicGFzc3dvcmQiOiI1NG01dW5HPyIsImF1dGgiOiJaR0YyYVdRNk5UUnROWFZ1Uno4PSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2021-06-08T12:45:09Z"
  name: progetcred
  namespace: default
  resourceVersion: "103858"
  uid: c21a7e07-ec27-4d06-9064-357c220

If you want to check that the right password is stored, run

kubectl get secret progetcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode

This will output.


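As an aside, the auth field inside .dockerconfigjson is simply the base64 encoding of username:password. A quick demo with the credentials used above:

```shell
# base64("user:password") is what ends up in the secret's auth field
printf 'david:MyStupidPassword123' | base64
# → ZGF2aWQ6TXlTdHVwaWRQYXNzd29yZDEyMw==
```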
With a secret for ProGet in place time to deploy the N8N application.

What have we done?

  • Installed a secret
  • Confirmed the secret was available
  • Confirmed the secret is visible

Deploy N8N Image from ProGet server

We need 2 files: a deployment file and an ingress file. Actually, this could be a single file with a --- separator between the two, but I’ve kept them separate.

n8n-deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-server
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment
    spec:
      containers:
      - image:
        name: n8n-server
      imagePullSecrets:
      - name: progetcred

What does this mean?

apiVersion: apps/v1
kind: Deployment

Let K8s know this is a deployment

spec:
  replicas: 1

In the first instance only a single replica

    spec:
      containers:
      - image:
        name: n8n-server
      imagePullSecrets:
      - name: progetcred

Pull down the image from the local repo using the creds we just set up.

Deploy using

kubectl create -f n8n-traefik-deployment.yaml

Check using

kubectl get deployments.apps n8n-deployment

kubectl describe deployments.apps n8n-deployment

Both will show the image deployed and the number of replicas (1)

Expose the container port to Kubernetes

kubectl expose deploy n8n-deployment --port 5678 --type LoadBalancer

Note: I’m pretty sure this needs to be added to the deployment file
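The sentiment in that note seems right: the same exposure can be captured declaratively as a Service manifest kept next to the deployment file. A sketch (the selector matches the run: n8n-deployment label from the deployment above; this is my reading, untested against the post’s cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: n8n-deployment    # same name kubectl expose would have generated
spec:
  type: LoadBalancer      # MetalLB assigns the external IP
  selector:
    run: n8n-deployment   # matches the deployment's pod label
  ports:
  - port: 5678
    targetPort: 5678
```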

Next, we create the ingress for Traefik


---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: n8n-traefik
  namespace: default
spec:
  entryPoints:
    - n8n
  routes:
    - match: Host(`n8n.lan`)
      kind: Rule
      services:
        - name: n8n-deployment
          port: 5678

What does this mean?

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute

This tells Kubernetes we are interacting with Traefik.

  name: n8n-traefik
  namespace: default

This sets the name of the object we will be working with

  entryPoints:
    - n8n

This goes back to the traefikvalues.yaml file we edited to add an entry point for n8n on port 5678/TCP

  routes:
    - match: Host(`n8n.lan`)
      kind: Rule
      services:
        - name: n8n-deployment
          port: 5678

This tells Traefik that if something hits port 5678 addressed to n8n.lan, it should be forwarded on to the n8n-deployment container we just deployed, on its exposed port 5678.

Again, please use ` not ' in the above; they are different characters.

Deploy this

kubectl create -f n8n-traefik-ingress.yaml

Check it’s applied

 kubectl get ingressroute


NAME          AGE
dashboard     25h
n8n-traefik   24h

The dashboard ingress rule from earlier is still visible too.

Check that the rule is correct

kubectl describe ingressroute n8n-traefik

ends with

Spec:
  Entrypoints:
    n8n
  Routes:
    Kind:   Rule
    Match:  Host(`n8n.lan`)
    Services:
      Name:  n8n-deployment
      Port:  5678
Events:  <none>

The N8N app should now be deployed

Test N8N install

At this point, add n8n.lan (pointing at the load-balancer IP) either to your preferred DNS server or to the local PC’s hosts file

and go to

http://n8n.lan:5678
and you should see

The Traefik Dashboard



Click on Explore on the routers tab

This will display all the active routes

Click on the N8N route to see what is happening under the hood

This can be handy when troubleshooting as errors are displayed here


On Monday I’d never touched Kubernetes; by Friday I’ve got my own app running off my own Docker repo, from a Docker image I created using a Jenkins pipeline.

It’s been hard though; there is a lot of wrong, poorly worded and downright terrible documentation around Kubernetes out there, and I know I’ve just scratched the surface. This post is about the fact that the above can be done.

Moving forward I’ll be investigating

  • https
  • secrets
  • certificate management
  • security
  • scaling

Over the next week, I’ll look at the above and see if I can get Rocket.Chat and a couple of other Docker containers I use running, scaled, on a Kubernetes cluster.

The end result of this is to have the knowledge to migrate this off my own K8s Cluster onto one running on AWS…

Notes – Why this setup?

Every time you post to the internet, there will be someone available to tell you why you are wrong and how to do the above better using their method or knowledge.

So why did I do this all, this way, and not the method they chose?

If you scroll back up to the top, I had a single purpose for this post, and one which I think a lot of sysadmins, and people in general, would want the first time they use Kubernetes.

I wanted to spin up a server, as a basic cluster, using commands which could be used on other platforms (AWS, Google etc) and get my app, in this case, N8N working.

At this point I’m not concerned about scalability, security, or the pros and cons of MicroK8s; I wanted as vanilla an experience as I could get, with my containerised app working at the end of it.

After posting on Reddit, one specific user was very adamant in their feedback.

I thought I’d address some of the comments here

You should have used microk8s, k0s, or k3s. Or kubespray, I guess. This all falls apart if the master dies. You learned a lot, but very little about how to make k8s itself resilient or scalable.

This is correct: I could have used any one of these, but I chose not to. In fact, my experience with MicroK8s was so depressing I nearly gave up on the whole thing; at the time of writing it is, IMHO, unstable. Also, any service that brings its own set of “helpful commands”

microk8s.config    microk8s.docker    microk8s.inspect   microk8s.kubectl   microk8s.start     microk8s.stop
microk8s.disable   microk8s.enable    microk8s.istioctl  microk8s.reset     microk8s.status

is a layer of learning I didn’t need right now.

40gb disk is way too much unless you plan to add longhorn. With VMs, you could also add additional drives and use OpenEBS.

Sure, it probably is. It’s a virtual machine, so it’s not actually using all 40 GB, and again, this was described multiple times as a dev setup, not a production one. I also mentioned that in a future post I’d be looking at dynamic storage.

Host files are bad. It’s 2021. Set up PiHole or something and use ExternalDNS (I still use BIND+dhcpd because it’s worked for 15 years, takes almost no resources, and integrated with everything). This leads to “create a k8s service, get an automatic DNS record you can use with Traefik/Istio”

100% yes, and I do. However, someone reading this should not be stopped from learning something because they don’t have a DNS server set up at home. I checked, and I did mention BOTH options in the posts.

Why did you size your VMs at 2vCPU/2GB/20GB? No idea (2GB is astoundingly low for a kubeapi server, in general).

It’s a dev environment; it doesn’t need to consume 4 GB of RAM, it will run in 2 GB, and for some people a lack of hardware resources is a barrier to entry: they might have a machine with 8 GB of RAM, but not 16 GB. It’s irrelevant anyway, as it will be torn down, because it’s a dev environment, NOT A PRODUCTION one, designed to run a single Docker container.

Why one master and two workers instead of 1+1 or 1+3 or HA k3s/microk8s/k0s? No idea. Why didn’t you add tolerations to schedule on the master also? No idea.

Once again, not the point of this blog post. I wanted a simple cluster, a master and 2 nodes, something almost every Google search recommends when setting up this type of test environment. This is not the production part of the journey, so scalability and masters falling over are not the question here; the question is: how do I get my N8N container to work on a K8s cluster?

The choice of CNI is really significant. Why calico instead of flannel or weave or a combination? No idea.

And yet not one of the myriad posts I trawled through to get this all working mentioned Flannel or Weave; every single one suggested using Calico. I’m in no doubt there are factors in choosing a CNI; however, again, for this post there were not.

What are you using for PVCs? Anything? Why not Longhorn or OpenEBS or Ceph or NFS? No idea.

Because I don’t need to. That, as the post says many times, is not where this post is going; the post outlines what it is and what the next steps are. However, the comments so far seem to have bypassed this…

Why Traefik for ingress instead of NGINX or Istio or Envoy/Contour or using Istio labels with Traefik or…? No idea.

This one I can answer: I could not for the life of me get the NGINX Ingress Controller to work, and having googled, Traefik seemed to be the most common and well-documented ingress controller out there.

This article spends an incredible amount of time in how to get k8s running from “nothing” instead of using a pre-packaged distribution where someone has answered some basic questions, because there are enough other ones to make besides that, and spends no time whatsoever talking about the actual things which make k8s worth using over Docker or the challenges you may face.

By choice, I didn’t want to use any prepackaged system; in my personal journey I’ve found that most of them take me down irregular command sets. I now have the ability to set up a simple K8s cluster using kubeadm, the most native of Kubernetes deployments. So I made this choice.

K8S lets you scale. Ok. It lets you scale by dynamically ramping up sets. But it also lets you scale by having different deployment strategies/rollout strategies, integration for shared storage which can be claimed across nodes, resiliency for masters, filtering on node annotations, etc.

Quite literally in the thoughts section above, I wrote

Moving forward I’ll be investigating

  • secrets
  • certificate management
  • security
  • scaling

I had no need to scale N8N, and I didn’t want to. However, I do want to scale my Rocket.Chat server and build out a simple prod environment at home, where I will be covering things like scaling, dynamic NFS, and using HCP Vault for secrets.

It’s a bit of a puff piece and anyone who tries to follow this is going to watch it fall over the second they try to scale it. May as well be “the journey from libvirt to openstack” where the author recommends devstack or “the journey from docker to kubernetes” with minikube. microk8s/k3s/k0s don’t exist for no reason. They exist because a bunch of people like me invest a ton of engineering effort to let end-users solve real problems instead of fighting nitty-gritty architecture stuff that they probably never need to know and will probably never benefit from knowing, because they’ll have enough to worry about in the other levels of the stack.

And here we have it. These are comments left by someone heavily invested in Kubernetes who, unfortunately, rather than being willing to read and understand the purpose of this article and reach out to provide direction, was pushing his brand of soap to the masses.

Everything asked here was a relevant, good question, and within the scope of future posts.

The journey for me, however, is always:

  • I have something I want to get to work (N8N running on Kubernetes)
  • Can I do this?
  • If it works, how do I secure it?
  • How do I scale it?
  • How do I automate it?
  • Document the process…

Further Reading

zero to code – Tech Blog Posts – David Field
Linux, tech, gadgets, chromebook, chromeos, redhat, ansible, technology, android, tweaks, software, tools

MetalLB, bare metal load-balancer for Kubernetes

Install Traefik – Traefik
Traefik Documentation

How To Install and Use Docker on Ubuntu 20.04 | DigitalOcean
Docker is an application that simplifies the process of managing application processes in containers. In this tutorial, you’ll install and use Docker Community Edition (CE) on Ubuntu 20.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

About Calico
The value of using Calico for networking and network security for workloads and hosts.

Production-Grade Container Orchestration

Deployments – Kubernetes
A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments …

What is Helm in Kubernetes?
A step-by-step demo showing how to deploy your JFrog Artifactory HA cluster in Kubernetes using Helm Charts: 1. Create Helm Chart Repository 2. Setup Helm Client 3. View Helm Chart in Artifactory 4. Install Artifactory HA (Primary & Secondary Nodes) 5. View Nodes in Kubernetes Pods 6. Get New Artif…

Dashboard – Traefik
Traefik Documentation

By davidfield

Tech Enthusiast