From Zero to Code: Moving from Docker to Kubernetes

In the last few posts in this series, I've gone through setting up a private Docker repository and using Jenkins with Docker plugins to build a base image from nothing, patch it, and create and test an N8N Docker image in a Jenkins pipeline.

zero to code - Tech Blog Posts - David Field
Linux, tech, gadgets, chromebook, chromeos, redhat, ansible, technology, android, tweaks, software, tools

In this next post, I'll be covering how to

  • Manually build a 3 Node (Master and 2 nodes) Kubernetes Cluster
  • Deploy N8N from my Proget Repo
  • Install Metallb and Traefik
  • Access N8N from outside the Kubernetes cluster

You will need

To follow along with this post you'll need:

  • 3 Ubuntu 20.04 Servers - 2vCPU, 2Gb Ram, 40 Gb HDD
  • SSH Client
  • Ability to Access the internet from these servers
  • Patience
  • A ProGet repo set up as per the previous tutorial (or something you can run from Docker Hub)
  • The N8N image from the ProGet tutorial

Don't worry if you don't have the ProGet/N8N combination; you can use the N8N image from Docker Hub.

A disclaimer of sorts

I'm learning some of this at the same time as I'm writing these posts. Their purpose is to help me remember what I did in 6 months' time, so I can come back to them as a reference. I make them public because they might help someone.

All the information is out there on the internet, and I'll include the links I used to get this working at the end. However, I appreciate there may be a better method. If there is, drop me a line on the Discord server above and I'll happily learn and update the post if it works.

During this post I've not ventured into the realms of HTTPS; that will be the next post. The purpose of this post is to build a base from which to start doing other things, like getting certs from HCP Vault, dynamic NFS for persistent storage, and using Jenkins to deploy to the K8s cluster.

Everything has a beginning, a middle and an end, so let's start at the beginning.

Install the Kubernetes Cluster

I've chosen to use Ubuntu 20.04 for this because it's well documented, and it works. The setup will have a Docker back end for containers (yes, I know dockershim support is being removed soon, person on Reddit who will pick up on this and have a moan).

Installing a Kubernetes cluster (1 master, 2 nodes) on Ubuntu 20.04

3 machines need to be built on whatever platform you're using: VMware, VirtualBox, Proxmox, Hyper-V etc.

Use the following hostnames, and make a note of your IP addresses; they should be static addresses or a permanent DHCP lease.

  • Machine 1 (Ubuntu 20.04 LTS Server) – k8master –
  • Machine 2 (Ubuntu 20.04 LTS Server) – k8node1 –
  • Machine 3 (Ubuntu 20.04 LTS Server) – k8node2 –
  • 2vCPU, 2Gb Ram, 40 Gb HDD

On all three machines run

sudo apt update
sudo apt dist-upgrade -y

Setup the Server Prerequisites

Set the hostname for each server

Run this command on master node

sudo hostnamectl set-hostname "k8master" 

Run this command on node 1

sudo hostnamectl set-hostname "k8node1"  

Run this command on node 2

sudo hostnamectl set-hostname "k8node2"

On each of the hosts edit the hosts files

sudo nano /etc/hosts

Edit the hosts file

Add the following at the end using YOUR IP ADDRESSES

<your-master-ip>    k8master
<your-node1-ip>     k8node1
<your-node2-ip>     k8node2

If you run your own homelab DNS then you can update the DNS accordingly
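If you prefer to script it, the entries can be appended in one go. A sketch, with placeholder IPs (192.168.1.101-103 are examples; substitute your own):

```shell
# Append the cluster hostnames to /etc/hosts in one command
# (192.168.1.101-103 are example IPs - substitute your own)
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.1.101    k8master
192.168.1.102    k8node1
192.168.1.103    k8node2
EOF
```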

Disable Swap on each machine

Kubernetes needs swap disabled. Run the following on all 3 of the machines:

sudo vi /etc/fstab

Towards the end of this file, there will be a line that reads similar to this

swap.img    none     swap    sw  0   0

Put a hash in front of the line like so

#swap.img    none     swap    sw  0   0

Save and Exit the file

This will stop the swap being used after the next reboot. To stop it being used now, type:

sudo swapoff -a
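If you'd rather not hand-edit fstab, a sed one-liner can comment out the swap line for you — a sketch that keeps a backup of the original first:

```shell
# Comment out any swap entry in /etc/fstab, keeping a backup at /etc/fstab.bak
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# Turn off swap immediately as well
sudo swapoff -a
```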

Setup IP Forwarding

Ubuntu needs IP forwarding enabled on all 3 nodes

sudo vi /etc/sysctl.conf

Add this line to the end of the file

net.ipv4.ip_forward=1
Save and Exit the file

Run the following

sudo sysctl -p

This should return

net.ipv4.ip_forward = 1

Install Docker

Kubernetes needs an underlying container system to manage, for this example Docker will be used.

On each of the 3 machines run the following commands

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
sudo apt install docker-ce
sudo systemctl enable docker
sudo systemctl restart docker
sudo systemctl status docker

This should return

docker.service - Docker Application Container Engine
 Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
 Active: active (running) since Tue 2020-05-19 17:00:41 UTC; 17s ago
TriggeredBy: ● docker.socket
   Main PID: 24321 (dockerd)
  Tasks: 8
 Memory: 46.4M
 CGroup: /system.slice/docker.service
         └─24321 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Docker is installed.

Use docker as a non-root user

sudo usermod -aG docker ${USER}

Install Kubernetes Tools

The following tools need to be installed to get Kubernetes up and running.

kubectl, kubelet and kubeadm

Run the following commands on all 3 servers

sudo apt install -y apt-transport-https curl 

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-add-repository "deb https://apt.kubernetes.io/ kubernetes-xenial main" 

Update to add the new repository

sudo apt update 

Install the tools

sudo apt install -y kubelet kubeadm kubectl


Initialise Kubernetes on the Master Node

On the k8master node ONLY, the Kubernetes cluster needs to be initialised with the following command:

sudo kubeadm init

This will result in a lot of text


With the build successful, the following 3 commands need to be run NOT as root and NOT using sudo:

mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

With the master node initialised, the 2 worker nodes need to join the cluster using the command shown in the last line of the kubeadm init output.

Run this on k8node1 and k8node2

Your token and hash will be different, so don't just copy and paste this:
sudo kubeadm join --token b4sfnc.53ifyuncy017cnqq --discovery-token-ca-cert-hash sha256:5078c5b151bf776c7d2395cdae08080faa6f82973b989d29caaa4d58c28d0e4e

The resulting output should be fairly comprehensive


The desired output is

The node has joined the cluster

To check the cluster is up and running run the following on k8master

kubectl get nodes

This will output similar to the following

NAME       STATUS      ROLES                  AGE     VERSION
k8master   NotReady    control-plane,master   2d23h   v1.21.1
k8node1    NotReady    <none>                 2d23h   v1.21.1
k8node2    NotReady    <none>                 2d23h   v1.21.1

The NotReady status has occurred because we need to deploy a Container Network Interface (CNI) based pod network.

Add-ons like calico, kube-router and weave-net exist to do this.

As the name suggests, pod network add-ons allow pods to communicate with each other.

For this tutorial we'll install Calico.

Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services.

Install Calico on Master Node

Run the following command ONLY on the k8master node

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

The following output will be shown


To check the cluster is up and running as READY run the following on k8master

kubectl get nodes

This will output the following

NAME       STATUS   ROLES                  AGE     VERSION
k8master   Ready    control-plane,master   2d23h   v1.21.1
k8node1    Ready    <none>                 2d23h   v1.21.1
k8node2    Ready    <none>                 2d23h   v1.21.1

To confirm what is installed run on the k8master

kubectl get pods --all-namespaces

Should output something similar to this

Congratulations this is a working Kubernetes cluster

What have we done?

  • Built 3 Ubuntu servers
  • Set up the prereqs
  • Installed the Kubernetes Tools
  • Initialised a Master Node
  • Joined 2 worker nodes to the master node
  • Installed Calico so the nodes can talk to each other in the background
  • Confirmed all nodes are marked as READY

How do I use this?

There is a difference between installing something and using it. Kubernetes is very command-line driven, making a lot of use of the kubectl command. From this point forward, unless directed otherwise, the only place to run the commands is the k8master server on the cluster.


The rest of the tutorial is very command-line driven; Kubernetes, however, isn't without its IDE interfaces. The one I've had the most luck with is Lens.

Lens | The Kubernetes IDE
Lens IDE for Kubernetes. The only system you’ll ever need to take control of your Kubernetes clusters. It’s open source and free. Download it today!

I'm not going to spend too much time other than let you know it exists and there is a good guide for it here.

I've used Lens a lot while testing, to see errors and to delete resources.

Test the K8 Cluster

To test the cluster, running up a simple container such as Nginx works really well. Unless otherwise specified (as I will do later), images are pulled from Docker Hub.

There are a lot of concepts such as Pods, Deployments and Services which will be used in the remainder of this post. I will put links at the end of the post to learn more about these concepts.

From the command line, to confirm the image can be downloaded and spun up, run the following command:

kubectl create deployment nginx-web --image=nginx

This command creates a K8s deployment called nginx-web using the nginx image from Docker Hub.

A Kubernetes deployment is a resource object in Kubernetes that provides declarative updates to applications. A deployment allows you to describe an application's life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated.
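For comparison, here is roughly the manifest that `kubectl create deployment nginx-web --image=nginx` generates behind the scenes — a sketch trimmed to the essentials:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
  labels:
    app: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
      - name: nginx
        image: nginx
```

Saving this as nginx-web.yaml and running kubectl apply -f nginx-web.yaml would produce the same deployment declaratively.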

When successful the output will be

deployment.apps/nginx-web created

The kubectl get command is used to get basic information about the deployment

kubectl get deployments.apps

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   1/1     1            1           41s

This tells me that there is 1 container running and it's available.

For some additional information run

kubectl get deployments.apps  -o wide

It's also possible to view the created pod

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
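You rarely create bare pods directly (the deployment above manages them for you), but for reference, a minimal Pod manifest looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
```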

Run the command

kubectl get  pods

Results in

NAME                         READY   STATUS    RESTARTS   AGE
nginx-web-7748f7f978-nk8b2   1/1     Running   0          2m50s

These two commands show that the kubectl create command did its job, creating a single instance of Nginx as a container and starting it running.

Scale it up

One of the reasons for using Kubernetes is scaling to meet the needs of the deployed services. From the command line, this is done using the scale command; here we scale up from 1 Nginx replica to 4:

kubectl scale --replicas=4 deployment nginx-web

Run the kubectl get command again

kubectl get deployments.apps nginx-web

Will display

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-web   4/4     4            4           13m

There are now 4 Nginx replicas running and available.

Final test using HTTPD and Port 80

Deploy the Apache container

kubectl run http-web --image=httpd --port=80

Use kubectl expose to expose the container's port 80 to Kubernetes, in much the same way docker run -p 80:80 might work.

kubectl expose pod http-web --name=http-service --port=80 --type=NodePort
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

Use the kubectl get service command to see the underlying service which links the container to the k8s networking.

kubectl get service http-service

Because the type field is set to NodePort, the Kubernetes control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767).

Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.

http-service NodePort <none>        80:31098/TCP   10s

Run the wide version of the command

kubectl get pods http-web -o wide 
http-web   1/1     Running   0      59m   k8node2

Whether Apache is up can be tested using the following (your internal port and node may be different):

curl http://k8node2:31098

should result in

<html><body><h1>It works!</h1></body></html>

What have we done?

  • Spun up Nginx
  • Scaled Nginx from 1 to 4 containers
  • Spun up Apache
  • Exposed the Apache internal container port to Kubernetes
  • Checked the Apache server was accessible from the k8master node.

What you can't do yet.

See the container from the internet

With all the above tests you may be asking: why didn't I just open the Apache server in my web browser, why did I need to use curl?

A valid question. Out of the box, a Kubernetes install doesn't communicate outside of the Kubernetes cluster. It needs a couple of other additions, which we will install later, to do this.

YAML Files

Running this all from the command line seems very overwhelming... yes it is. However, most of the config can be done with YAML files, which are used with the kubectl apply and kubectl create commands.

This allows you to store the configs in git and make use of external tools to deploy containers on a K8s cluster.

Getting the N8N Container working on the Kubernetes cluster.

Having run a couple of commands to test that the cluster is working, I would use Lens to remove the Pods, Deployments and Services we just created. A good reason to set up and connect to your cluster.

To run the N8N software the deployment will be done using YAML files. A load balancer will be installed, and Traefik will be used to route the traffic from the outside/load balancer IP into the K8s cluster.

Install MetalLb

MetalLB, bare metal load-balancer for Kubernetes

MetalLB is installed on the k8master node by running the kubectl apply -f command against YAML manifests hosted on GitHub:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml

This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are:

  • The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
  • The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
  • Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.

The installation manifest does not include a configuration file. MetalLB's components will still start, but will remain idle until you provide a ConfigMap.

An example config file is available within the above link

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

The important part of the config is the addresses range on the last line.

This is a range of IPs outside of your DHCP range which MetalLB can use and assign to services (the range shown above is an example). It does appear to be smart enough to check if an IP is already in use: I found my jumpbox was inside my defined load balancer range and MetalLB ignored the jumpbox IP.

To use this config run

 kubectl create -f metallbconfig.yaml

Confirm the deployment using kubectl

 kubectl -n metallb-system get all

This should result in something similar to

Shows a working Load Balancer

Which looks like this in Lens

TEST Metallb

To test the load balancer we will use Nginx again, because it's quick, lightweight and has visible output.

kubectl create deploy nginx --image nginx
kubectl get all

This will output

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-x4z8f   1/1     Running   0          10s

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
service/kubernetes      ClusterIP                              443/TCP
service/nginx-service   NodePort                               80:32246/TCP

Expose port 80 as type LoadBalancer (Metallb)

kubectl expose deploy nginx --port 80 --type LoadBalancer

Check this

kubectl get svc
nginx LoadBalancer  80:30342/TCP   17s

The Nginx port 80 is now bound to an external MetalLB IP.

You should now be able to access the Nginx home page at the external IP assigned by MetalLB.

This works so why bother with Traefik?

This is a fair question, imagine you were hosting 100 websites on your Kubernetes cluster, each one of those would need an external-facing IP address on the load balancer with port 443 or 80 assigned to it.

Traefik will stop the need for that by enabling its routing engine to direct traffic to the correct K8s container.
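As a hypothetical sketch of the idea: two sites could share the single web entry point (and therefore one load-balancer IP), distinguished only by hostname, using the IngressRoute objects we'll set up below. The hostnames and service names here are made up for illustration:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: site-a
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`site-a.lan`)
      kind: Rule
      services:
        - name: site-a-service   # hypothetical service name
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: site-b
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`site-b.lan`)
      kind: Rule
      services:
        - name: site-b-service   # hypothetical service name
          port: 80
```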

What have we done?

  • Installed Metallb
  • Configured a range of IP Addresses for Metallb to use
  • Deployed the Metallb YAML config file
  • Installed NginX
  • Exposed port 80
  • Associated the Nginx service with MetalLB
  • Confirmed we can access the NginX landing page from an external IP

Install a Traefik Ingress Server

Traefik Documentation

We will install Traefik using Helm. Helm is best described at this point as apt or yum for Kubernetes; it's a (public or private) repository of predefined installable K8s packages.

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like Apt/Yum/Homebrew for K8S.

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.

Install Helm and use it.

On Ubuntu, helm is available as a snap package

sudo snap install helm --classic

Once installed use helm to install the Traefik repository

helm repo add traefik https://helm.traefik.io/traefik
"traefik" has been added to your repositories

Update the Helm repos

helm repo update

Once complete you will see

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈

List the available helm repos

helm repo list

which will list


Search the repo to see the version of traefik available.

helm search repo traefik

Will show

NAME              CHART VERSION   APP VERSION   DESCRIPTION
traefik/traefik   9.19.1          2.4.8         A Traefik based Kubernetes ingress controller

The latest available at time of writing.

It's possible to dump what is about to be installed to a file, to see what is happening, using:

helm show values traefik/traefik > traefikvalues.yaml

I want to edit this file because I want to run N8N on port 5678 (I don't need to, but this is an example of a method of using Traefik on non-standard ports).

vi traefikvalues.yaml

Scroll down to the ports section (Formatting is off)

  # The name of this one can't be changed as it is used for the readiness and
  # liveness probes, but you can adjust its config to your liking
port: 9000
# Use hostPort if set.
# hostPort: 9000
# Use hostIP if set. If not set, Kubernetes will default to, which
# means it's listening on all your interfaces and all your IPs. You may want
# to set this value if you need traefik to listen on specific interface
# only.
# hostIP:

# Override the liveness/readiness port. This is useful to integrate traefik
# with an external Load Balancer that performs healthchecks.
# healthchecksPort: 9000

# Defines whether the port is exposed if service.type is LoadBalancer or
# NodePort.
# You SHOULD NOT expose the traefik port on production deployments.
# If you want to access it from outside of your cluster,
# use `kubectl port-forward` or create a secure ingress
expose: false
# The exposed port for this service
exposedPort: 9000
# The port protocol (TCP/UDP)
protocol: TCP

port: 8000
# hostPort: 8000
expose: true
exposedPort: 80
# The port protocol (TCP/UDP)
protocol: TCP
# Use nodeport if set. This is useful if you have configured Traefik in a
# LoadBalancer
# nodePort: 32080
# Port Redirections
# Added in 2.2, you can make permanent redirects via entrypoints.
# redirectTo: websecure

port: 8443
# hostPort: 8443
expose: true
exposedPort: 443
# The port protocol (TCP/UDP)
protocol: TCP
# nodePort: 32443
# Set TLS at the entrypoint
  enabled: false
  # this is the name of a TLSOption definition
  options: ""
  certResolver: ""
  domains: []
  # - main:
  #   sans:
  #     -
  #     -

After the websecure section add

  n8n:
    port: 10000
    expose: true
    exposedPort: 5678
    protocol: TCP

Save and exit the file

What I've done here is added another exposed port to Traefik (5678/TCP) which I'll cover later.

Install Traefik using the edited traefikvalues.yaml

helm install traefik traefik/traefik --values traefikvalues.yaml -n traefik --create-namespace

Check to see if helm installed Traefik as expected

helm list -n traefik

should return

traefik traefik         1               2021-06-09 14:06:26.292094565 +0000 UTC deployed        traefik-9.19.1  2.4.8

Check in Kubernetes that Traefik has deployed successfully

 kubectl -n traefik get all


NAME                          READY   STATUS    RESTARTS   AGE
pod/traefik-b5cf49d5b-7x9p8   1/1     Running   0          5m50s

NAME              TYPE           PORT(S)                                     AGE
service/traefik   LoadBalancer   5678:30068/TCP,80:32371/TCP,443:30185/TCP   5m51s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           5m51s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/traefik-b5cf49d5b   1         1         1       5m50s

From the output we can see that the service is ready, Traefik has bound to the MetalLB IP, and ports 5678, 80 and 443 are exposed externally.

Traefik is running

To test the configuration, we will use Traefik to display the Traefik dashboard "externally", i.e. outside the K8s cluster.

Expose the Traefik Dashboard

Dashboard - Traefik
Traefik Documentation

Traefik comes with a dashboard; it's a read-only service, meaning you can't use it to edit settings.

The dashboard is already running within the K8s cluster; however, there has been no configuration to provide access to it through the Traefik router.

We can deploy the dashboard configuration so it is visible externally, using a YAML file and kubectl create.

vi dashboard.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.lan`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService

What is this doing?

kind: IngressRoute

Using the Traefik API, an IngressRoute will be set up; this is a route from the outside Traefik port to the internal K8s service, which in this example is the Traefik dashboard.

  name: dashboard

The Service will be listed under the name dashboard

  - web

This refers to the entry points defined in the traefikvalues.yaml file we edited earlier (the same section we added n8n to). The web entry point is on port 80/TCP.

    - match: Host(`traefik.lan`)
      kind: Rule

When traffic is found on port 80, if it's directed to traefik.lan then do something with it, otherwise drop it. You can use whatever DNS name you want here.

Important note: that is not a ' above, it is a ` (backtick), a very different thing, and if you use ' this will fail.

        - name: api@internal
          kind: TraefikService

When traffic comes in on port 80/TCP bound for traefik.lan, direct it to the internal Traefik API service.

Deploy the dashboard

kubectl create -f dashboard.yaml

Then run the following to make sure things have worked

kubectl describe ingressroute dashboard

This will return

Name:         dashboard
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  traefik.containo.us/v1alpha1
Kind:         IngressRoute
  Creation Timestamp:  2021-06-10T13:46:31Z
  Generation:          1
  Managed Fields:
    API Version:  traefik.containo.us/v1alpha1
    Fields Type:  FieldsV1
    Manager:         kubectl-create
    Operation:       Update
    Time:            2021-06-10T13:46:31Z
  Resource Version:  378794
  UID:               27e45c27-e6ae-4385-8366-6b590998a4cd
  Entry Points:
    web
  Routes:
    Kind:   Rule
    Match:  Host(`traefik.lan`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:      <none>

As the last few lines display

  Entry Points:
    web
  Routes:
    Kind:   Rule
    Match:  Host(`traefik.lan`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:      <none>

The config has deployed accordingly.

At this point, you'll either need to update your local DNS (if you run such a thing) to point traefik.lan to the load balancer IP, or...

edit your /etc/hosts file (on Linux or Mac) to have an entry:

<loadbalancer-ip>    traefik.lan

Test it out at

http://traefik.lan
Will display

Have a click around to see what you can see.

What have we done?

  • Installed Helm
  • Added a Helm repository for Traefik
  • Downloaded and edited the values file
  • Installed Traefik
  • Checked it was bound to the load balancer IP
  • Ran a Traefik config to expose the dashboard externally
  • Confirmed it worked.

Deploy the N8N Application

All the fundamental building blocks are now in place to deploy the N8N app I have running on my private ProGet Docker repo to the Kubernetes cluster.

There is one last step: setting up a secret. Kubernetes will pull a container down from Docker Hub by default; in this example I need to let it know where to pull the image from, and that location is username/password protected.

Set up a Secret

We'll run this from the command line; in future posts I'll cover how to run it as a YAML file.

The command, run on k8master, takes the following format:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>

Run the following

kubectl create secret docker-registry progetcred --docker-server=<your-proget-server> --docker-username=david --docker-password=MyStupidPassword123

Check that the secret is in place

kubectl get secret progetcred --output=yaml


apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2021-06-08T12:45:09Z"
  name: progetcred
  namespace: default
  resourceVersion: "103858"
  uid: c21a7e07-ec27-4d06-9064-357c220

If you want to check that the right password is stored, run

kubectl get secret progetcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode

This will output the stored Docker config JSON, including the password.


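Worth knowing: Kubernetes secrets are only base64-encoded, not encrypted, which is why a simple base64 --decode reveals the password. A quick shell illustration (the JSON and registry name are made-up examples):

```shell
# Encode a fake registry credential the way Kubernetes stores secret data...
encoded=$(printf '{"auths":{"proget.example.lan":{"username":"david"}}}' | base64 -w0)
echo "$encoded"
# ...and decode it again - no key needed, just base64
echo "$encoded" | base64 --decode
```

So anyone with read access to the secret can recover the credentials; RBAC is what protects them.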
With a secret for ProGet in place time to deploy the N8N application.

What have we done?

  • Installed a secret
  • Confirmed the secret was available
  • Confirmed the secret is visible

Deploy N8N Image from ProGet server

We need 2 files: a deployment file and an ingress file. Actually this can be a single file with a --- separator between the two, but I've kept them separate.

n8n-deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: n8n-server
  name: n8n-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: n8n-deployment
  template:
    metadata:
      labels:
        run: n8n-deployment
    spec:
      containers:
      - image: <your-proget-server>/n8n:latest
        name: n8n-server
      imagePullSecrets:
      - name: progetcred

What does this mean?

apiVersion: apps/v1
kind: Deployment

Let K8s know this is a deployment

  replicas: 1

In the first instance only a single replica

      - image: <your-proget-server>/n8n:latest
        name: n8n-server
      imagePullSecrets:
      - name: progetcred

Pull down the image from the local repo using the creds we just set up.

Deploy using

kubectl create -f n8n-traefik-deployment.yaml

Check using

kubectl get deployments.apps n8n-deployment
kubectl describe deployments.apps n8n-deployment

Both will show the image deployed and the number of replicas (1)

Expose the container port to Kubernetes

kubectl expose deploy n8n-deployment --port 5678 --type LoadBalancer
Note: I'm pretty sure this needs to be added to the deployment file
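As the note suggests, the expose step can also be written declaratively. A sketch of an equivalent Service manifest, assuming the deployment's pods carry the run: n8n-deployment label from the file above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: n8n-deployment
spec:
  type: LoadBalancer
  selector:
    run: n8n-deployment
  ports:
  - port: 5678
    targetPort: 5678
```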

Next, we create the ingress for Traefik


apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: n8n-traefik
  namespace: default
spec:
  entryPoints:
    - n8n
  routes:
    - match: Host(`n8n.lan`)
      kind: Rule
      services:
        - name: n8n-deployment
          port: 5678

What does this mean?

kind: IngressRoute

This tells Kubernetes we are interacting with Traefik

  name: n8n-traefik
  namespace: default

This sets the name of the object we will be working with

    - n8n

This goes back to the traefikvalues.yaml file we edited to add an entrypoint for n8n on port 5678/TCP

    - match: Host(`n8n.lan`)
      kind: Rule
        - name: n8n-deployment
          port: 5678

This tells Traefik: if something hits port 5678 and is addressed to n8n.lan, then forward it on to the n8n-deployment container we just deployed, on its exposed port 5678.

Again, please use ` (backtick) not ' in the above; they are different.

Deploy this

kubectl create -f n8n-traefik-ingress.yaml

Check it's applied

 kubectl get ingressroute


NAME          AGE
dashboard     25h
n8n-traefik   24h

I can see the dashboard ingress rule is still there, along with the new n8n-traefik rule.

Check that the rule is correct

kubectl describe ingressroute n8n-traefik

ends with

   Kind:   Rule
   Match:  Host(`n8n.lan`)
   Services:
     Name:  n8n-deployment
     Port:  5678
Events:      <none>

The N8N app should now be deployed

Test N8N install

At this point, either add n8n.lan (or your preferred DNS name) to your DNS server or to the local PC's hosts file

and go to

http://n8n.lan:5678
and you should see the N8N workflow editor.

The Traefik Dashboard



Click on Explore on the routers tab

This will display all the active routes

Click on the N8N route to see what is happening under the hood

This can be handy when troubleshooting as errors are displayed here


On Monday I'd never touched Kubernetes; by Friday I had my own app running off my own Docker repo, from a Docker image I created using a Jenkins pipeline.

It's been hard though; there is a lot of wrong, poorly worded and downright terrible documentation around Kubernetes out there, and I know I've just scratched the surface. This post is proof that the above can be done.

Moving forward I'll be investigating

  • https
  • secrets
  • certificate management
  • security
  • scaling

Over the next week, I'll look at the above and see if I can get Rocket.Chat and a couple of other Docker containers I use running, scaled, on a Kubernetes cluster.

The end result of this is to have the knowledge to migrate this off my own K8s Cluster onto one running on AWS...

Further Reading

zero to code - Tech Blog Posts - David Field
Linux, tech, gadgets, chromebook, chromeos, redhat, ansible, technology, android, tweaks, software, tools
MetalLB, bare metal load-balancer for Kubernetes
Install Traefik - Traefik
Traefik Documentation
How To Install and Use Docker on Ubuntu 20.04 | DigitalOcean
Docker is an application that simplifies the process of managing application processes in containers. In this tutorial, you’ll install and use Docker Community Edition (CE) on Ubuntu 20.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.
About Calico
The value of using Calico for networking and network security for workloads and hosts.
Production-Grade Container Orchestration
A Deployment provides declarative updates for Pods and ReplicaSets.You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments …
What is Helm in Kubernetes?
A step-by-step demo showing how to deploy your JFrog Artifactory HA cluster in Kubernetes using Helm Charts: 1. Create Helm Chart Repository 2. Setup Helm Client 3. View Helm Chart in Artifactory 4. Install Artifactory HA (Primary & Secondary Nodes) 5. View Nodes in Kubernetes Pods 6. Get New Artif…
Dashboard - Traefik
Traefik Documentation
