If you’ve used Docker, you’ve probably used docker run to launch a few different containerized applications. However, you’d love to know how to build your own container; maybe you’ve got an application or a script you’d like to put in one.

Hopefully, this post will make the start of that journey much simpler by explaining first how to manually build an Ubuntu 20.04 base image and ensure it’s updated and patched. Then, as an example, an NPM application called N8N will be built inside a container and run.

Everything in this post is self-hosted, including a private Docker Repository using ProGet.

Version Date Description Notes
1.0 7 June 2021 First Draft

The Purpose of this post

There are three things this post looks to cover, both as a manual method and as Jenkins Pipelines:

1) Create our own Ubuntu 20.04 base image

2) Patch the OS

3) Install the NPM application N8N

n8n is an extendable workflow automation tool which enables you to connect anything to everything via its open, fair-code model.


However, this raises some questions.

1) Can’t I just download a core Ubuntu base image from DockerHub?

You can. However, if you’re working in an environment where you need auditability of the whole container process, knowing exactly where your images came from is a useful step.

I appreciate this step isn’t for everyone; for me, it was more of a “How do I do this?” question than anything else.

If, however, you are interested in some reading, have a look here:

Half of 4 Million Public Docker Hub Images Found to Have Critical Vulnerabilities
A recent analysis of around 4 million Docker Hub images by cyber security firm Prevasio found that 51% of the images had exploitable vulnerabilities. A large number of these were cryptocurrency miners, both open and hidden, and 6,432 of the images had malware.

Malicious Docker Hub Container Images Used for Cryptocurrency Mining – Security News
In our monitoring of Docker-related threats, we came across a threat actor who uploaded malicious images to Docker Hub for cryptocurrency mining.

2) Why not “create a base OS” and “patch the OS” at the same time?

Because as first Dockerfiles go, this is a pretty simple one and it’s nice to have success early. It also means you have a reference base OS image and a patched OS image that might (or might not) be useful to you.

3) Why N8N?

N8N is a node.js app that requires a version of node.js newer than the one in the Ubuntu repos (at the time of writing). It has a web front end, keeps its data in a specific directory, and needs a port open to access it, which to me covers the basic functionality of a container:

  • Install
  • Ports
  • Volumes
  • Webapp

It’s also a pretty useful app if you’re into IFTTT/Zapier type stuff

Things you’ll Need

To move forward with this you’ll need a few things which I’ve not covered in this post:

  • A Jenkins Server
  • Ubuntu 20.04 (VM is ok)
  • Java Headless JRE and Docker-CE installed on Ubuntu
  • The Ubuntu Server as a Jenkins Agent.
  • Proget Private Docker Repository

Google and some of the links at the end will help with this.


  • These are not production-ready commands
  • I’m no guru
  • Spelling mistakes, I’ll make a few.  

So how do we do this?


Create a Core Ubuntu base image

The first step, on an Ubuntu 20.04 installation, is to install debootstrap, the tool used to create the base image:

sudo apt install debootstrap

debootstrap is a tool that will install a Debian/Ubuntu base system into a subdirectory of another, already installed system. It doesn’t require an installation CD, just access to a Debian/Ubuntu repository. It can also be installed and run from another operating system, so, for instance, you can use debootstrap to install Debian onto an unused partition from a running Gentoo system. It can also be used to create a rootfs for a machine of a different architecture, which is known as “cross-debootstrapping”

To create a bootstrap of 20.04 (codename Focal), the following commands are run:

sudo debootstrap focal focal

or if you don’t want to see what’s going on

sudo debootstrap focal focal > /dev/null

This command MUST be run with root privileges and will take about five minutes at most to run through.

You’ll see a bunch of lines go past

I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id F6ECB3762474EDA9D21B7022871920D1991BC93C)
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on
I: Retrieving adduser 3.118ubuntu2
I: Validating adduser 3.118ubuntu2
I: Retrieving apt 2.0.2
I: Validating apt 2.0.2
I: Retrieving apt-utils 2.0.2
I: Validating apt-utils 2.0.2
I: Retrieving base-files 11ubuntu5
...
I: Configuring python3-gi...
I: Configuring ubuntu-minimal...
I: Configuring libc-bin...
I: Configuring systemd...
I: Configuring ca-certificates...
I: Base system installed successfully.

Once complete, there will be a folder called focal:

/demo$ ls focal -l
total 60
lrwxrwxrwx  1 root root    7 Jun  4 16:29 bin -> usr/bin
drwxr-xr-x  2 root root 4096 Apr 15  2020 boot
drwxr-xr-x  4 root root 4096 Jun  4 16:29 dev
drwxr-xr-x 59 root root 4096 Jun  4 16:29 etc
drwxr-xr-x  2 root root 4096 Apr 15  2020 home
lrwxrwxrwx  1 root root    7 Jun  4 16:29 lib -> usr/lib
lrwxrwxrwx  1 root root    9 Jun  4 16:29 lib32 -> usr/lib32
lrwxrwxrwx  1 root root    9 Jun  4 16:29 lib64 -> usr/lib64
lrwxrwxrwx  1 root root   10 Jun  4 16:29 libx32 -> usr/libx32
drwxr-xr-x  2 root root 4096 Jun  4 16:29 media
drwxr-xr-x  2 root root 4096 Jun  4 16:29 mnt
drwxr-xr-x  2 root root 4096 Jun  4 16:29 opt
drwxr-xr-x  2 root root 4096 Apr 15  2020 proc
drwx------  2 root root 4096 Jun  4 16:29 root
drwxr-xr-x  8 root root 4096 Jun  4 16:29 run
lrwxrwxrwx  1 root root    8 Jun  4 16:29 sbin -> usr/sbin
drwxr-xr-x  2 root root 4096 Jun  4 16:29 srv
drwxr-xr-x  2 root root 4096 Apr 15  2020 sys
drwxrwxrwt  2 root root 4096 Jun  4 16:29 tmp
drwxr-xr-x 13 root root 4096 Jun  4 16:29 usr
drwxr-xr-x 11 root root 4096 Jun  4 16:29 var

This contains the base bootstrap needed to get Ubuntu 20.04 Server up and running.

Before this is done, however, let’s log in.

Login to Docker

If you ran through my previous post about setting up ProGet and disabled Anonymous Push to the repo, then you’ll need to log in to the private repo.


  • I have created a user called David with access to my docker feeds, if you’ve not done this you can use Admin as the username while testing
  • the IP is that of my Dev Proget instance, your IP/DNS address will be different


docker login 192.168.40.43:8081 --username david

If you want to script this you can use

echo myweakpassword > password.txt

cat password.txt | docker login 192.168.40.43:8081 --username david --password-stdin

For reference, this creates the file ~/.docker/config.json:

{
    "auths": {
        "": {
            "auth": "ZGF2aWQ6NTRtNXVuRz8="
        }
    }
}
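Be aware that the auth value is nothing more than base64 of username:password, so treat config.json as a secret. A quick sketch using the example value above (yours will differ):

```shell
# decode the auth entry from config.json -- it is just user:password in base64
echo 'ZGF2aWQ6NTRtNXVuRz8=' | base64 -d
# prints: david:54m5unG?
```

Anyone who can read this file can recover the registry credentials, so lock down its permissions.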

Create a Docker Image from the Bootstrap

This folder now needs to be archived with tar and imported into Docker as an image:

sudo tar -C focal -c . | docker import - 192.168.40.43:8081/dockerimages/base_ubuntu_focal

This will result in a checksum


Which we can see has imported into the local docker image repository

docker image ls
192.168.40.43:8081/dockerimages/base_ubuntu_focal    latest   0a86dd06c377   5 hours ago   322MB

The base image has been created
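If the tar flags are unfamiliar: -C changes into the directory before archiving, so every path in the archive is relative to that root, which is exactly the filesystem layout docker import expects. A quick sketch with a throwaway directory standing in for the debootstrap output:

```shell
# build a miniature root filesystem in place of the real debootstrap output
mkdir -p /tmp/focal_demo/etc
echo focal > /tmp/focal_demo/etc/codename

# -C makes tar archive paths relative to that directory (e.g. ./etc/codename),
# so listing the archive shows a rootfs-style layout
tar -C /tmp/focal_demo -c . | tar -t
```

Piping that same archive into docker import - would turn it into a single-layer image, just as the real command above does with the focal folder.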

This needs to be pushed up to the private ProGet Docker repo, which is done using the following command:

docker push 192.168.40.43:8081/dockerimages/base_ubuntu_focal

Within ProGet, the image is now visible in the dockerimages feed.

What have we done?

  • Created an Ubuntu 20.04 Base Image using debootstrap
  • Turned that into a docker image
  • Uploaded it to the local repo
  • Pushed it to the remote ProGet repo.

Update the Ubuntu Base Image using a Dockerfile

The Ubuntu base image we have created is the bare minimum needed to run the Ubuntu Server OS within Docker; to be secure, it will need patching before we use it to create our N8N image.

Having a Jenkins pipeline later which will keep this patched every couple of weeks, with an audit trail of previous images, is also going to be useful.

The way to do this is to create a Dockerfile. The best description I’ve seen is that a Dockerfile is a “virtual assistant” which has a set of commands to perform inside a container on your behalf.

##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal
##Set working Directory
WORKDIR /opt
##Copy complete sources list
COPY sources.list /etc/apt/sources.list.d/
##Update the image
RUN apt-get update && apt-get upgrade -y

How does this break down?

This is pretty readable stuff…

##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal

FROM specifies where the base image Docker will run from is pulled from; this is the base image we created in the first stage.

##Set working Directory
WORKDIR /opt

WORKDIR sets which directory should be the working directory; in this example it’s actually irrelevant, because I’m not using a working directory.

##Copy complete sources list
COPY sources.list /etc/apt/sources.list.d/

COPY: in the same folder as the Dockerfile is a file called sources.list (below), which is a basic Ubuntu Focal sources list. COPY will copy this file to the sources.list.d folder in the Docker base image.

deb focal main restricted
deb focal-updates main restricted
deb focal universe
deb focal-updates universe
deb focal multiverse
deb focal-updates multiverse
deb focal-backports main restricted universe multiverse
deb focal-security main restricted
deb focal-security universe
deb focal-security multiverse

The sources.list on the base container created earlier contains only the first line, with no restricted repo.

##Update the image
RUN apt-get update && apt-get upgrade -y

RUN will run the command(s) within the container; in this example, I want to run a patching update. You may notice that no sudo is used, as everything within the default Docker setup runs as root.

Build the Patched base image

In the same folder as the Dockerfile and the sources.list file, run the following command:

docker build -t <image name> .

Note the . at the end of the line; that’s not a typo, it’s telling docker build to look for a Dockerfile in the current folder.

Once the build has completed (and it’s a verbose action), the generated docker image will be visible if you type

docker image ls

This will display:

<image name>   latest   e39e825b65ff   2 days ago   717MB

You’ll note that the image size has increased from 322MB to 717MB as patches have been added.

This image should be pushed to the ProGet private docker repository:

docker push <image name>

This will result in 2 images in the dockerimages repo, the base_ubuntu_local and the base_ubuntu_local_patched images.

If you want to jump onto the new patched image and access the bash shell then the following command can be run.

docker run -it <image name> /bin/bash

Create a Docker image which runs N8N


##Grab Ubuntu base image from my repo
FROM <patched base image>
##Install latest NodeJS and NPM
RUN apt-get install -y curl
RUN curl -sL <nodejs setup script URL> | sudo -E bash -
RUN apt-get install -y gcc g++ make
RUN apt-get install -y nodejs
##Install N8N Software
RUN npm install -g npm@latest
##Open Firewall Port
EXPOSE 5678
##Install N8N
RUN npm install n8n -g
##Run N8N
CMD "n8n"

Again, let’s break this down

##Grab Ubuntu base image from my repo
FROM <patched base image>

FROM In this Dockerfile we use the recently patched docker image created in the last stage.

##Install latest NodeJS and NPM
RUN apt-get install -y curl
RUN curl -sL <nodejs setup script URL> | sudo -E bash -
RUN apt-get install -y gcc g++ make
RUN apt-get install -y nodejs

RUN: the NodeJS provided in the core Ubuntu repos is (at the time of writing) far too old to run N8N, so the RUN directives will install curl, pull down a script that sets up the latest nodejs and npm, and install some build prerequisites.

##Install N8N Software
RUN npm install -g npm@latest

RUN With the latest NodeJS and NPM installed this command just makes sure NPM is actually up to date.

##Open Firewall Port
EXPOSE 5678

EXPOSE will “expose” TCP network port 5678 for the container. When using docker run, the -p flag will map a port on the host to the exposed Docker port.

##Install N8N
RUN npm install n8n -g

RUN: with NodeJS and NPM installed and up to date, and the TCP port exposed, the RUN command is used to install N8N.

##Run N8N
CMD "n8n"

CMD: I had to do a bit of research to find the difference between RUN and CMD, and it comes down to this:

RUN is an image build step, the state of the container after a RUN command will be committed to the container image. A Dockerfile can have many RUN steps that layer on top of one another to build the image.

CMD is the command the container executes by default when you launch the built image. A Dockerfile will only use the final CMD defined. The CMD can be overridden when starting a container with docker run $image $other_command.

So RUN is run multiple times and commits things to the docker image, CMD is used to execute the actual process that will run in the container.
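A minimal, hypothetical Dockerfile makes the distinction visible; nothing here is part of the N8N build, and it deliberately uses a generic public base image for brevity:

```dockerfile
FROM ubuntu:20.04

# RUN executes at build time; each one commits a new layer to the image
RUN apt-get update && apt-get install -y curl
RUN echo "built here" > /build-info.txt

# CMD just records the default start command; only the last CMD counts,
# and it can be overridden at launch: docker run <image> cat /build-info.txt
CMD ["bash"]
```

Building this runs both RUN steps once; the CMD does nothing until a container is started from the resulting image.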

Build the N8N Image

From the folder the Dockerfile is located in, run:

docker build -t <image name> .

This will verbosely run through, and a couple of times it might look like an error is being generated (these are warnings, not errors).

Push this to the dockercontainers repo on ProGet:

docker push <image name>

The newly built container image is pushed to the private repo.

Run the N8N Docker Image

Having done all the hard work, it’s now possible to run the image and use N8N. There are two ways to run the docker container image: a verbose mode, which is good for testing, and detached, which will carry on running even if you close the shell window that launched the container.

Run verbose

docker run -it --rm --name n8n -p 5678:5678 -v /home/david/.n8n:/root/.n8n <image name>

This will display the output of the running container on the screen.

Run detached

docker run -it --rm -d --name n8n -p 5678:5678 -v /home/david/.n8n:/root/.n8n <image name>

You’ll note the following

-p - Maps the host's tcp/5678 port to the exposed port on the docker image
-v - I've mapped the folder the image puts its data files in, so N8N will run over restarts

If you open up the URL of the host the N8N Docker container was run on:

http://<ip address or DNS>:5678

What have we done?

  • Created a Docker base image.
  • Patched the base image.
  • Installed N8N using the latest NodeJS and NPM
  • Pushed all the images to a private ProGet Repository
  • Run the N8N Image and opened the application

Some Notes

For troubleshooting here are some useful commands

Publish to Private repos/change tags

Above, I’ve tagged the images as latest. To change the tag on an image from latest to 1.0.0 and push the result, run:

docker tag <image>:latest <image>:1.0.0
docker push <image>:1.0.0

Hop into a running container and have a look around

If the docker run command seems to exit OK, but you can’t reach the web app, run:

docker ps

Make a note of the “container id” then run

docker exec -it <container_id_or_name> /bin/bash

This will drop you into the bash shell of the running container. You can install software and run troubleshooting from here to find the problem in the container.

Jenkins Pipeline

The above explains what is going on under the hood. You’ll want to run this regularly for patching and bug fixes on different Docker images, and there are several tools available to do so (Rundeck is another option). This tutorial, however, focuses on using Jenkins as an automation orchestrator.

The Jenkinsfile

Jenkins can be fed a Jenkinsfile, which, much like a Dockerfile, is a set of commands telling Jenkins what to do in order to build or automate something. Using Jenkinsfiles means that your code can be held in an SCM repository like GitLab or Bitbucket and version controlled.

The alternative is to build the Jenkins pipeline within the Jenkins interface.

This is good for development and testing, however not good for production.

Some Reading

If you’ve never used Jenkins or Jenkinsfiles before, then I’d suggest having a read of this post I wrote in 2018

Jenkins — Pipeline (Beginners guide)
The following document is designed to explain what Jenkins Pipelining is, and provide examples of a Jenkins pipeline which runs a basic script on a Jenkins slave.

I found what was online to be a garbled chop shop of assumptions and needed to get some clarity in my own mind as to how Jenkins Pipelines work; it might help you too.

Needed setup in Jenkins before you start

Before any of this will work you’ll need the following setup in Jenkins


Out of the box, Jenkins doesn’t have a clue what Docker is and uses plugins to resolve this. I’d suggest installing the following plugins:

Docker Pipeline
Build and use Docker containers from pipelines.

This plugin integrates Jenkins with Docker.

Docker Slaves
Uses Docker containers to run Jenkins build agents.

This plugin allows you to add various docker commands to your job as build steps.

Even if they are not used now, they may help in the future.

Ensure Jenkins is restarted after the plugins are installed.

Where is Docker?

Jenkins needs to know where the docker binary is installed when a docker-related pipeline is executed.

From the Home Dashboard head to Manage Jenkins -> Global Tool configuration and scroll down until you see Docker.

Fill in the blanks and click on Save

A remote Ubuntu node

As a rule, I try to keep the Jenkins server purely for orchestrating jobs and have a remote node run the actual pipeline. This stops the Jenkins server from getting overloaded and keeps things easy to manage.

As the process we are automating uses Ubuntu, I’d suggest installing Ubuntu Server 20.04 in a VM with a minimum of 2GB RAM, a 50GB hard disk and 2 CPUs, on a network accessible by the Jenkins server.

Once installed run:

sudo apt update
sudo apt dist-upgrade -y

Install Docker

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] focal stable"
sudo apt update
apt-cache policy docker-ce

sudo apt install docker-ce

sudo systemctl restart docker

sudo systemctl enable docker

sudo usermod -aG docker ${USER}

Further information in the links below

Install Java

Jenkins (at the time of writing) only supports JRE 8 or JRE 11, so install the headless version with:

sudo apt install openjdk-11-jre-headless

Create a Jenkins workspace folder

Jenkins needs a place to work. I tend to put this outside the home folder; you can put it anywhere, just remember where you created it.

sudo mkdir /jenkins
sudo chown -R ${USER}:${USER} /jenkins

(Optional) Install Passwordless SSH

You’ll need creds to connect to this remote machine. In this example I’ll use a username/password; however, you might want to (and probably should) use passwordless SSH.

2 Simple Steps to Set up Passwordless SSH Login on Ubuntu
This tutorial explains how to set up passwordless SSH login on an Ubuntu desktop. There’re basically two ways of authenticating user login with OpenSSH server: password authentication and public key authentication. The latter is also known as passwordless SSH login because you don’t need to enter yo…

Add a Node in Jenkins

From the Jenkins Dashboard click on Manage Jenkins -> Manage Nodes and Clouds

Click on New Node and give the node a name and select Permanent Agent

Click Save

In the Configuration screen ensure that the following is set

Setting Detail Notes
Name devubuntu
Description Ubuntu Remote Docker Device
Number of Executors 1
Remote Root Directory /jenkins
Labels dev docker ubuntu
Usage Only build jobs with label expressions matching the node
Launch Method Launch Agents via SSH
Host IP or DNS of Ubuntu Server
Credentials See following notes
Host Key Verification Non verifying Verification Strategy Self-signed certs
Availability Keep Agent Online as much as possible

For Credentials Click on Add -> Jenkins

Complete the popup with the method of connecting to your Ubuntu Server

For this example use Username with Password unless you already have passwordless SSH Working

Add a Username, Password and the ID which is how Jenkins will display the creds

Click on Save

Select the Creds you entered and click on Save

Click on Log and you should see similar output

[06/07/21 11:21:46] [SSH] Checking java version of java
[06/07/21 11:21:46] [SSH] java -version returned 11.0.11.
[06/07/21 11:21:46] [SSH] Starting sftp client.
[06/07/21 11:21:46] [SSH] Copying latest remoting.jar...
Source agent hash is F7B9C09212C05E6A48C6A52793BBFC04. Installed agent hash is F7B9C09212C05E6A48C6A52793BBFC04
Verified agent jar. No update is necessary.
Expanded the channel window size to 4MB
[06/07/21 11:21:46] [SSH] Starting agent process: cd "/jenkins" && java -jar remoting.jar -workDir /jenkins -jar-cache /jenkins/remoting/jarCache
Jun 07, 2021 10:21:47 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /jenkins/remoting as a remoting work directory
Jun 07, 2021 10:21:47 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /jenkins/remoting
<===[JENKINS REMOTING CAPACITY]===>channel started
Remoting version: 4.7
This is a Unix agent
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by jenkins.slaves.StandardOutputSwapper$ChannelSwapper to constructor
Please consider reporting this to the maintainers of jenkins.slaves.StandardOutputSwapper$ChannelSwapper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Evacuated stdout
Agent successfully connected and online

Specifically the last line

Agent successfully connected and online

And the new node should be listed under Nodes

If there are errors, it’s one of two things:

  1. Java 11 not installed
  2. Rights on /jenkins folder

What have we done?

  • Added Docker Plugins to Jenkins
  • Created a Remote Jenkins Ubuntu Node
  • Installed Docker on the Remote Jenkins node
  • Connected the node to the Jenkins server using SSH


The reading I’ve done over the years has led me to keep my Jenkinsfiles as self-contained as possible, so they contain everything I need in the Jenkinsfile.

I have 3 Jenkinsfiles in 3 separate projects in a group in GitLab.

Jenkinsfile – Create Base Image

pipeline {
    environment {
        imagename = ""
        pushimage = ""
    }
    agent { label 'docker' }
    options {
        ansiColor('xterm')
    }
    stages {
        stage('CleanWorkspace') {
            steps {
                cleanWs()
            }
        }
        stage('GITLab Checkout') {
            steps {
                git branch: 'master', credentialsId: 'lancert', url: ''
            }
        }
        stage('Build image') {
            steps {
                sh "sudo apt-get install -y debootstrap"
                sh "debootstrap focal focal > /dev/null"
                sh "tar -C focal -c . | docker import -"
                sh "docker push"
            }
        }
        stage('Push image') {
            steps {
                withDockerRegistry([credentialsId: 'ProGet_david', url: '']) {
                    sh 'docker push $pushimage'
                }
            }
        }
    }
}

This is broken down as follows

Environment Variables here are global to the whole Jenkinsfile, I’ve set the URL for the ProGet Docker Repo

environment {
    imagename = ""
    pushimage = ""
}

The node this should run on is defined using the docker label assigned to it:

agent { label 'docker' }
options {
    ansiColor('xterm')
}

The Options section under agent has Jenkins display colour coding in its output and needs the following plugin installed.

Adds ANSI coloring to the Console Output

The Stages refer to the green boxes seen on the Project page

I make a habit of cleaning up the /jenkins workspace for the project before I start as not doing so has caused debugging issues. I like to make sure this is idempotent.

stage('CleanWorkspace') {
    steps {
        cleanWs()
    }
}

There is a checkout of all the GitLab code; this is a repeat of the first stage above, so while it’s in the code, it won’t do much.

stage('GITLab Checkout') {
    steps {
        git branch: 'master', credentialsId: 'lancert', url: ''
    }
}

The build stage in this instance is just a set of commands for Ubuntu to run locally.

stage('Build image') {
    steps {
        sh "sudo apt-get install -y debootstrap"
        sh "debootstrap focal focal > /dev/null"
        sh "tar -C focal -c . | docker import -"
        sh "docker push"
    }
}

I’ve not added a test here; if I were to, I’d do a “docker image ls” and make sure my image was imported.

Push the image up to the ProGet server using withDockerRegistry. If the URL isn’t added here, then Jenkins will attempt an upload to Docker Hub.

stage('Push image') {
    steps {
        withDockerRegistry([credentialsId: 'ProGet_david', url: '']) {
            sh 'docker push $pushimage'
        }
    }
}

Jenkinsfile – Patch Base Image

It’s possible, using the Jenkins Pipeline method described below, to have this pipeline watch the pipeline above and run automatically whenever there is a successful build.

Select Build after other projects are built and watch the first project.
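The same chaining can also live in the Jenkinsfile itself via a declarative triggers block; the upstream job name here is hypothetical, and the rest of the pipeline is stripped down to show only the trigger:

```groovy
pipeline {
    agent { label 'docker' }
    // equivalent to ticking "Build after other projects are built" in the UI
    triggers {
        upstream(upstreamProjects: 'create-base-image',
                 threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Confirm') {
            steps {
                echo 'Triggered by a successful base image build'
            }
        }
    }
}
```

Keeping the trigger in the Jenkinsfile means it is version-controlled along with the rest of the pipeline.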

pipeline {
    environment {
        imagename = ""
        pushimage = ""
    }
    agent { label 'docker' }
    options {
        ansiColor('xterm')
    }
    stages {
        stage('CleanWorkspace') {
            steps {
                cleanWs()
            }
        }
        stage('GITLab Checkout') {
            steps {
                git branch: 'master', credentialsId: 'lancert', url: ''
            }
        }
        stage('Build image') {
            steps {
                script {
                    updateme = docker.build(imagename)
                }
            }
        }
        stage('Test image') {
            steps {
                script {
                    updateme.inside {
                        sh 'echo "Tests passed"'
                    }
                }
            }
        }
        stage('Push image') {
            steps {
                withDockerRegistry([credentialsId: 'ProGet_david', url: '']) {
                    sh 'docker push $pushimage'
                }
            }
        }
    }
}

You’ll see that most of the format of the Jenkinsfile is the same, using the same stages.

The two differences are

The Build image stage

stage('Build image') {
    steps {
        script {
            updateme = docker.build(imagename)
        }
    }
}

Using a variable, updateme, the docker.build method will use the Dockerfile held in the same repository and folder as the Jenkinsfile and build it, tagging it with the value of the imagename variable declared at the top of the Jenkinsfile.
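For reference, a sketch of how docker.build from the Docker Pipeline plugin is typically used; the test command here is illustrative, not part of this build:

```groovy
script {
    // docker.build returns an Image object tagged with the given name
    def img = docker.build(imagename)

    // that object drives the later stages: run a command inside a
    // throwaway container started from the freshly built image
    img.inside {
        sh 'cat /etc/lsb-release'
    }
}
```

The same object also exposes img.push(), which is an alternative to shelling out to docker push in the final stage.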

The updateme variable is then used with the docker inside method in this test

stage('Test image') {
    steps {
        script {
            updateme.inside {
                sh 'echo "Tests passed"'
            }
        }
    }
}

At this stage, Jenkins will spin up a copy of the image built in the previous stage as a container and run the echo command.

These tests can be expanded to check the last update time or any other tests.

Once the test is complete Jenkins will take the container down and remove it.

It’s worth noting that in the last stage of this file there is a line

sh 'docker push $pushimage'

The variable pushimage is declared at the top of the Jenkinsfile; because it is being used in the sh (shell) context, it is written as $pushimage.
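The quoting matters here; a short sketch of the two behaviours inside a steps block:

```groovy
// single quotes: Groovy passes the literal text $pushimage through, and the
// shell expands it at run time from the environment block
sh 'docker push $pushimage'

// double quotes: Groovy interpolates the value itself before the shell runs
sh "docker push ${env.pushimage}"
```

Both produce the same command here; the single-quoted form avoids accidentally leaking Groovy variables into shell commands.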

Jenkinsfile – Install N8N

The final Jenkinsfile is the same as the last one, except the variables imagename and pushimage have changed.

pipeline {
    environment {
        imagename = ""
        pushimage = ""
    }
    agent { label 'docker' }
    options {
        ansiColor('xterm')
    }
    stages {
        stage('CleanWorkspace') {
            steps {
                cleanWs()
            }
        }
        stage('GITLab Checkout') {
            steps {
                git branch: 'master', credentialsId: 'lancert', url: ''
            }
        }
        stage('Build image') {
            steps {
                script {
                    n8n = docker.build(imagename)
                }
            }
        }
        stage('Test image') {
            steps {
                script {
                    n8n.inside {
                        sh 'echo "Tests passed"'
                    }
                }
            }
        }
        stage('Push image') {
            steps {
                withDockerRegistry([credentialsId: 'ProGet_david', url: '']) {
                    sh 'docker push $pushimage'
                }
            }
        }
    }
}

What have we learnt here?

  • Using Jenkinsfiles
  • The format of a Jenkinsfile
  • How Stages are displayed
  • Using $variable with sh
  • Using docker.inside for testing

Using a Jenkinsfile in Jenkins

We have 3 Jenkinsfiles; how are they used in Jenkins? They can be used mainly in a Pipeline or a Multibranch Pipeline. As the complexity is low, this post will configure a Pipeline to pull down the Jenkinsfiles.

From the Home Dashboard click on New Item

Select Pipeline and click on OK

Enter a Description, scroll down to Pipeline


  • Pipeline script from SCM
  • SCM – Git
  • Enter the Repo URL and select the credentials
  • Choose the Branch (Leave as default if unsure)

Scroll down and make sure Script Path has Jenkinsfile in there

Click on Save

From the Job screen

Click Build Now

The Build will create the stages listed in the Jenkinsfile and run through them

Click on the Current top build number

Repeat this for the three Jenkins files.


There is a lot here. I’ve put this all down as notes for myself and hope they might help others. This is only scratching the surface of what can be done with Jenkins, Docker and automation.

Further Reading

Pushing builds to DockerHub via Jenkins Pipelines – Brightbox

How to Connect to Remote SSH Agents?
Issue How to do the initial connection of SSH agents to Jenkins, using SSH keys. Environment CloudBees CI (CloudBees Core)CloudBees Jenkins EnterpriseCloudBees Jenkins TeamCloudBees Jenkins P…

Jenkins – an open source automation server which enables developers around the world to reliably build, test, and deploy their software

Setting up ProGet as a Private Docker Repository
When creating your own Docker images it’s useful to have a local private repo tokeep those not so stable versions on. The usual go-to for this would be theSonatype Nexus3 OSS version which is a great bit of software if somewhat GUIunfriendly and upsell driven these days. My preferred alternative…

Create a base image
How to create base images

How to Install Node.js on Ubuntu and Update npm to the Latest Version
If you try installing the latest version of node using the apt-package manager,you’ll end up with v10.19.0. This is the latest version in the ubuntu app store,but it’s not the latest released version of NodeJS. This is because when new versions of a software are released, it can take monthsfor …

How To Install and Use Docker on Ubuntu 20.04 | DigitalOcean
Docker is an application that simplifies the process of managing application processes in containers. In this tutorial, you’ll install and use Docker Community Edition (CE) on Ubuntu 20.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

Using a Jenkinsfile
Jenkins – an open source automation server which enables developers around the world to reliably build, test, and deploy their software

By davidfield

Tech Enthusiast