If you've started using Docker, you've probably used docker run to launch some different containerized applications. However, you'd love to know how to build your own container; maybe you've got an application or a script you'd like to put in one.
Hopefully, this post makes the start of that journey much simpler by explaining first how to manually build an Ubuntu 20.04 base image and ensure it's updated and patched, then, as an example, how an NPM application called N8N is built inside a container and run.
Everything in this post is self-hosted, including a private Docker Repository using ProGet.
Version | Date | Description | Notes |
1.0 | 7 June 2021 | First Draft | |
The Purpose of this post
There are three things this post looks to cover, both as a manual method and as Jenkins Pipelines:
1) Create our own Ubuntu 20.04 base image
2) Patch the OS
3) Install the NPM application N8N


However, this raises some questions:
1) Can't I just download a core Ubuntu base image from DockerHub?
You can. However, if you're working in an environment where you need auditability of the whole container process, knowing exactly where your images came from is a useful step.
I appreciate this step isn’t for everyone, and for me, it was more of a “How do I do this” question than anything else.
If, however, you are interested in some reading, have a look here.


2) Why not do "create a base OS" and "patch the OS" at the same time?
Because as first Dockerfiles go, this is a pretty simple one and it’s nice to have success early. It also means you have a reference base OS image and a patched OS image that might (or might not) be useful to you.
3) Why N8N?
N8N is a node.js app that requires a version of node.js newer than the one in the Ubuntu repos (at the time of writing). It has a web front end, keeps its data in a specific directory and needs a port open to access it, which to me covers the basic functionality of a container:
- Install
- Ports
- Volumes
- Webapp
It's also a pretty useful app if you're into IFTTT/Zapier type stuff.
Things you’ll Need
To move forward with this you’ll need a few things which I’ve not covered in this post:
- A Jenkins Server
- Ubuntu 20.04 (VM is ok)
- Java Headless JRE and Docker-CE installed on Ubuntu
- The Ubuntu Server as a Jenkins Agent.
- ProGet Private Docker Repository
Google and some of the links at the end will help with this.
Disclaimer
- These are not production-ready commands
- I’m no guru
- Spelling mistakes, I’ll make a few.
So how do we do this?
Manually
Create a Core Ubuntu base image
The first step, on an Ubuntu 20.04 installation, is to install debootstrap, the tool used to create the base image.
sudo apt install debootstrap
debootstrap is a tool that will install a Debian/Ubuntu base system into a subdirectory of another, already installed system. It doesn’t require an installation CD, just access to a Debian/Ubuntu repository. It can also be installed and run from another operating system, so, for instance, you can use debootstrap to install Debian onto an unused partition from a running Gentoo system. It can also be used to create a rootfs for a machine of a different architecture, which is known as “cross-debootstrapping”
To create a bootstrap of 20.04 (Codename Focal) the following commands are run
sudo debootstrap focal focal
or if you don’t want to see what’s going on
sudo debootstrap focal focal > /dev/null
This command MUST be run with root privileges and will take about five minutes at most to run through.
You’ll see a bunch of lines go past
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id F6ECB3762474EDA9D21B7022871920D1991BC93C)
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://archive.ubuntu.com/ubuntu...
I: Retrieving adduser 3.118ubuntu2
I: Validating adduser 3.118ubuntu2
I: Retrieving apt 2.0.2
I: Validating apt 2.0.2
I: Retrieving apt-utils 2.0.2
I: Validating apt-utils 2.0.2
I: Retrieving base-files 11ubuntu5
...
I: Configuring python3-gi...
I: Configuring ubuntu-minimal...
I: Configuring libc-bin...
I: Configuring systemd...
I: Configuring ca-certificates...
I: Base system installed successfully.
Once complete, there will be a folder called focal:
/demo$ ls focal -l
total 60
lrwxrwxrwx  1 root root    7 Jun  4 16:29 bin -> usr/bin
drwxr-xr-x  2 root root 4096 Apr 15  2020 boot
drwxr-xr-x  4 root root 4096 Jun  4 16:29 dev
drwxr-xr-x 59 root root 4096 Jun  4 16:29 etc
drwxr-xr-x  2 root root 4096 Apr 15  2020 home
lrwxrwxrwx  1 root root    7 Jun  4 16:29 lib -> usr/lib
lrwxrwxrwx  1 root root    9 Jun  4 16:29 lib32 -> usr/lib32
lrwxrwxrwx  1 root root    9 Jun  4 16:29 lib64 -> usr/lib64
lrwxrwxrwx  1 root root   10 Jun  4 16:29 libx32 -> usr/libx32
drwxr-xr-x  2 root root 4096 Jun  4 16:29 media
drwxr-xr-x  2 root root 4096 Jun  4 16:29 mnt
drwxr-xr-x  2 root root 4096 Jun  4 16:29 opt
drwxr-xr-x  2 root root 4096 Apr 15  2020 proc
drwx------  2 root root 4096 Jun  4 16:29 root
drwxr-xr-x  8 root root 4096 Jun  4 16:29 run
lrwxrwxrwx  1 root root    8 Jun  4 16:29 sbin -> usr/sbin
drwxr-xr-x  2 root root 4096 Jun  4 16:29 srv
drwxr-xr-x  2 root root 4096 Apr 15  2020 sys
drwxrwxrwt  2 root root 4096 Jun  4 16:29 tmp
drwxr-xr-x 13 root root 4096 Jun  4 16:29 usr
drwxr-xr-x 11 root root 4096 Jun  4 16:29 var
This folder contains the base bootstrap needed to get Ubuntu 20.04 server up and running.
Before this is turned into an image, however, let's log in.
Login to Docker
If you ran through my previous post about setting up ProGet and disabled Anonymous Push to the repo, then you'll need to log in to the private repo.
Note:
- I have created a user called David with access to my docker feeds, if you’ve not done this you can use Admin as the username while testing
- the IP is that of my Dev Proget instance, your IP/DNS address will be different
Login
docker login --username david 192.168.40.43:8081
If you want to script this you can use
echo myweakpassword > password.txt
cat password.txt | docker login --username david --password-stdin 192.168.40.43:8081
For reference, this creates a file under ~/.docker/config.json
{
  "auths": {
    "192.168.40.43:8081": {
      "auth": "ZGF2aWQ6NTRtNXVuRz8="
    }
  }
}
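For reference, that auth value is nothing more than base64 of username:password, so treat config.json as a secret. A quick sketch with a placeholder credential (david:myweakpassword is hypothetical, not the real pair):

```shell
# The "auth" field in ~/.docker/config.json is base64("username:password").
# This is encoding, not encryption - anyone who can read the file can
# recover the password. "myweakpassword" is a placeholder credential.
auth=$(printf '%s' 'david:myweakpassword' | base64)
echo "$auth"

# Decoding proves the round trip:
printf '%s' "$auth" | base64 -d
```

This is why locking down read access to ~/.docker/config.json matters on shared build machines.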
Create a Docker Image from the Bootstrap
This needs to be archived with tar and imported into Docker as an image:
sudo tar -C focal -c . | docker import - 192.168.40.43:8081/dockerimages/base_ubuntu_focal
This will result in a checksum
sha256:bddd4e4d849e39df3b951dcf4a0301752ed4a2739e12af18b9f090a27199e24f
Which we can see has imported into the local docker image repository
docker image ls
192.168.40.43:8081/dockerimages/base_ubuntu_focal   latest   0a86dd06c377   5 hours ago   322MB
The base image has been created
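Incidentally, the short IMAGE ID shown by docker image ls is just the first 12 hex characters of an image's full sha256 digest, which is why the digest returned by docker import maps to a short ID. A small sketch, using the digest from the import above:

```shell
# Derive the short image ID (the form `docker image ls` displays) from a
# full sha256 digest string returned by `docker import`.
digest="sha256:bddd4e4d849e39df3b951dcf4a0301752ed4a2739e12af18b9f090a27199e24f"
short_id=$(printf '%s' "${digest#sha256:}" | cut -c1-12)
echo "$short_id"   # bddd4e4d849e
```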
This needs to be pushed up to the private ProGet Docker repo, which is done using the following command.
docker push 192.168.40.43:8081/dockerimages/base_ubuntu_focal:latest
Within ProGet

What have we done?
- Created an Ubuntu 20.04 Base Image using debootstrap
- Turned that into a docker image
- Uploaded it to the local repo
- Pushed it to the remote ProGet repo.
Update the Ubuntu Base Image using a Dockerfile
The Ubuntu base image we have created is the bare minimum needed to run the Ubuntu Server OS within Docker, and to be secure it will need patching before we use it to create our N8N image.
Having a Jenkins pipeline later which will keep this patched every couple of weeks and an audit of previous images is also going to be useful.
The way to do this is to create a Dockerfile. The best description I've seen is that a Dockerfile is a "virtual assistant" with a set of commands to perform inside a container on your behalf.
##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal
##Set working Directory
WORKDIR /opt
##Copy complete sources list
COPY sources.list /etc/apt/sources.list.d/
##Update the image
RUN apt-get update && apt-get upgrade -y
How does this break down?
This is pretty readable stuff…
##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal
FROM defines where the base image Docker will run from is pulled; this is the base image we created in the first stage.
##Set working Directory
WORKDIR /opt
WORKDIR sets which directory should be the working directory; in this example it's actually irrelevant, because I'm not using a working directory.
##Copy complete sources list
COPY sources.list /etc/apt/sources.list.d/
COPY: in the same folder as the Dockerfile is a file called sources.list (below), which is a basic Ubuntu Focal sources.list. COPY will copy this file into the sources.list.d folder in the Docker base image.
deb http://gb.archive.ubuntu.com/ubuntu focal main restricted
deb http://gb.archive.ubuntu.com/ubuntu focal-updates main restricted
deb http://gb.archive.ubuntu.com/ubuntu focal universe
deb http://gb.archive.ubuntu.com/ubuntu focal-updates universe
deb http://gb.archive.ubuntu.com/ubuntu focal multiverse
deb http://gb.archive.ubuntu.com/ubuntu focal-updates multiverse
deb http://gb.archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse
deb http://gb.archive.ubuntu.com/ubuntu focal-security main restricted
deb http://gb.archive.ubuntu.com/ubuntu focal-security universe
deb http://gb.archive.ubuntu.com/ubuntu focal-security multiverse
The sources.list in the base container created earlier contains only the first line, with no restricted repo.
##Update the image
RUN apt-get update && apt-get upgrade -y
RUN will run the command(s) within the container; in this example, I want to run a patching update. You may notice that no sudo is used, as everything within the default Docker setup runs as root.
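As an aside, a hedged variant of the same Dockerfile for anyone chasing smaller layers: DEBIAN_FRONTEND=noninteractive suppresses debconf prompts during unattended upgrades, and clearing the apt lists shrinks the committed layer. This is a sketch, not the file used in the rest of the post:

```dockerfile
##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal
##Copy complete sources list
COPY sources.list /etc/apt/sources.list.d/
##Update the image without debconf prompts, then drop the apt cache
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*
```

Note that later images built FROM this one would need their own apt-get update before installing packages, since the lists have been removed.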
Build the Patched base image
In the same folder as the Dockerfile and the sources.list file, run the following command.
docker build -t 192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched:latest .
Note the . at the end of the line; that's not a typo, it's telling docker build to look for a Dockerfile in the current folder.
Once the build has completed (and it's a verbose process), the generated Docker image will be visible if you type
docker image ls
which will display:
192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched latest e39e825b65ff 2 days ago 717MB
You'll note that the image size has increased from 322MB to 717MB as patches have been added.
This image should be pushed to the ProGet Private docker repository
docker push 192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched:latest
This will result in two images in the dockerimages repo: base_ubuntu_focal and base_ubuntu_focal_patched.

If you want to jump into the new patched image and access a bash shell, the following command can be run.
docker run -it 192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched:latest /bin/bash
Create a Docker image which runs N8N
Dockerfile
##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched:latest
##Install latest NodeJS and NPM
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
RUN apt-get install -y gcc g++ make
RUN apt-get install -y nodejs
##Install N8N Software
RUN npm install -g npm@latest
##Open Firewall Port
EXPOSE 5678
##Install N8N
RUN npm install n8n -g
##Run N8N
CMD "n8n"
Again, let’s break this down
##Grab Ubuntu base image from my repo
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched:latest
FROM In this Dockerfile we use the recently patched docker image created in the last stage.
##Install latest NodeJS and NPM
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
RUN apt-get install -y gcc g++ make
RUN apt-get install -y nodejs
RUN: the NodeJS provided in the core Ubuntu repos is (at the time of writing) far too old to run N8N, so the RUN directives install curl, pull down a script that sets up the latest NodeJS and NPM, and install some prerequisites.
##Install N8N Software
RUN npm install -g npm@latest
RUN With the latest NodeJS and NPM installed this command just makes sure NPM is actually up to date.
##Open Firewall Port
EXPOSE 5678
EXPOSE will "expose" TCP port 5678 on the container. When using docker run, the -p flag maps a host port to the exposed container port.
##Install N8N
RUN npm install n8n -g
RUN: with NodeJS and NPM installed and up to date, and the TCP port exposed, the RUN command is used to install N8N.
##Run N8N
CMD "n8n"
CMD: I had to do a bit of research to find the difference between RUN and CMD, and it appears:
RUN is an image build step; the state of the container after a RUN command will be committed to the container image. A Dockerfile can have many RUN steps that layer on top of one another to build the image.
CMD is the command the container executes by default when you launch the built image. A Dockerfile will only use the final CMD defined. The CMD can be overridden when starting a container with docker run $image $other_command.
So RUN is run multiple times and commits things to the docker image, CMD is used to execute the actual process that will run in the container.
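To make the distinction concrete, here is a hypothetical two-instruction image (the /built-at.txt path is made up for illustration) where the RUN result is baked in at build time and the CMD only fires at docker run time:

```dockerfile
FROM 192.168.40.43:8081/dockerimages/base_ubuntu_focal_patched:latest
##RUN executes during `docker build`; its result is committed to the image
RUN echo "written during docker build" > /built-at.txt
##CMD executes at `docker run` time and can be overridden
CMD ["cat", "/built-at.txt"]
```

Running the image with no arguments would print the file; running it with a trailing /bin/bash would replace the CMD entirely and drop you into a shell instead.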
Build the N8N Image
From the folder the Dockerfile is located in, run:
docker build -t 192.168.40.43:8081/dockercontainers/library/n8n:latest .
This will run through verbosely, and a couple of times it might look like an error is being generated (these are warnings, not errors).
Push this to the dockercontainers repo on ProGet
docker push 192.168.40.43:8081/dockercontainers/library/n8n:latest
The newly built container image is pushed to the private repo.

Run the N8N Docker Image
Having done all the hard work, it's now possible to run the image and use N8N. There are two ways to run the container image: in verbose mode, which is good for testing, and detached, which will carry on running even if you close the shell window that launched it.
Run verbose
docker run -it --rm --name n8n -p 5678:5678 -v /home/david/.n8n:/root/.n8n 192.168.40.43:8081/dockercontainers/library/n8n:latest
This will display the container's output on the screen.
Run detached
docker run -it --rm -d --name n8n -p 5678:5678 -v /home/david/.n8n:/root/.n8n 192.168.40.43:8081/dockercontainers/library/n8n:latest
You’ll note the following
-p - Maps the host's tcp/5678 port to the exposed port on the docker image
-v - I've mapped the folder the image puts its data files in, so N8N will survive restarts
If you open up the URL of the host the N8N Docker container is running on:
http://<ip address or DNS>:5678

What have we done?
- Created a Docker base image.
- Patched the base image.
- Installed N8N using the latest NodeJS and NPM
- Pushed all the images to a private ProGet Repository
- Run the N8N Image and opened the application
Some Notes
For troubleshooting here are some useful commands
Publish to Private repos/change tags
Above, I've tagged the images as latest. To change the tag on an image, for example from latest to 1.0.0, run:
docker tag n8n:1.0.0 192.168.40.43:8081/dockercontainers/n8n:1.0.0
docker push 192.168.40.43:8081/dockercontainers/n8n:1.0.0
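If you retag often, a tiny helper keeps the registry prefix in one place. This is a hypothetical convenience function (qualify is my name for it, and the registry address is the example one used throughout); the docker tag/push lines are commented out so the sketch stands alone:

```shell
# Build a fully-qualified image reference for the private registry.
# REGISTRY is an example value - substitute your own ProGet address.
REGISTRY="192.168.40.43:8081/dockercontainers"

qualify() {
  image="$1"
  tag="${2:-latest}"   # default to :latest when no tag is given
  echo "${REGISTRY}/${image}:${tag}"
}

target=$(qualify n8n 1.0.0)
echo "$target"
# docker tag n8n:1.0.0 "$target"
# docker push "$target"
```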
Hop into a running container and have a look around
If the docker run command seems to exit OK but you can't reach the web app, run
docker ps
Make a note of the “container id” then run
docker exec -it <container_id_or_name> /bin/bash
This will drop you into a bash shell in the running container. You can install software and run troubleshooting from here to find the problem in the container.
Jenkins Pipeline
The above explains what is going on under the hood. You'll want to run this regularly for patching and bug fixes on different Docker images, and there are several tools available to do this; Rundeck is one option. This tutorial, however, focuses on using Jenkins as an automation orchestrator.
The Jenkinsfile
Jenkins can be fed a Jenkinsfile which, much like a Dockerfile, is a set of commands telling Jenkins what to do in order to build or automate something. Using Jenkinsfiles means that your code can be held in an SCM repository like Gitlab or Bitbucket and version controlled.
The alternative is to build the Jenkins pipeline within the Jenkins interface.

This is good for development and testing, however not good for production.
Some Reading
If you’ve never used Jenkins or Jenkinsfiles before, then I’d suggest having a read of this post I wrote in 2018
I found what was online to be a garbled chop shop of assumptions, and needed to get some clarity in my own mind as to how Jenkins Pipelines work; it might help.
Setup needed in Jenkins before you start
Before any of this will work you’ll need the following setup in Jenkins
Plugins
Out of the box, Jenkins doesn't have a clue what Docker is and uses plugins to resolve this. I'd suggest installing the following plugins:




Even if they are not used now, they may help in the future.
Ensure Jenkins is restarted after the plugins are installed.
Where is Docker?
Jenkins needs to know where the docker binary is installed when a Docker-related pipeline is executed.
From the Home Dashboard head to Manage Jenkins -> Global Tool configuration and scroll down until you see Docker.

Fill in the blanks and click on Save
A remote Ubuntu node
As a rule, I try to keep the Jenkins server for orchestrating jobs only, and have a remote node run the actual pipeline. This stops the Jenkins server from getting overloaded and keeps things easy to manage.
As the process we are automating uses Ubuntu, I'd suggest installing Ubuntu Server 20.04 in a VM with a minimum of 2GB RAM, a 50GB hard disk and 2 CPUs, on a network accessible by the Jenkins server.
Once installed run:
sudo apt update
sudo apt dist-upgrade -y
Install Docker
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl restart docker
sudo systemctl enable docker
sudo usermod -aG docker ${USER}
Further information in the links below
Install Java
Jenkins (at the time of writing) only supports JRE 8 or JRE 11 so install the headless version with
sudo apt install openjdk-11-jre-headless
Create a Jenkins workspace folder
Jenkins needs a place to work. I tend to put this outside the home folder; you can put it anywhere, just remember where you added it.
sudo mkdir /jenkins
sudo chown -R ${USER}:${USER} /jenkins
(Optional) Install Passwordless SSH
You'll need credentials to connect to this remote machine. In this example I'll use a username/password, however you might want to (and probably should) use passwordless SSH.

Add a Node in Jenkins
From the Jenkins Dashboard click on Manage Jenkins -> Manage Nodes and Clouds

Click on New Node and give the node a name and select Permanent Agent

Click Save
In the Configuration screen ensure that the following is set
Setting | Detail | Notes |
Name | devubuntu | |
Description | Ubuntu Remote Docker Device | |
Number of Executors | 1 | |
Remote Root Directory | /jenkins | |
Labels | dev docker ubuntu | |
Usage | Only build jobs with label expressions matching the node | |
Launch Method | Launch Agents via SSH | |
Host | IP or DNS of Ubuntu Server | |
Credentials | See following notes | |
Host Key Verification | Non verifying strategy | Self-signed certs |
Availability | Keep Agent Online as much as possible | |
For Credentials Click on Add -> Jenkins

Complete the popup with the method of connecting to your Ubuntu Server

For this example use Username with Password unless you already have passwordless SSH Working
Add a Username, Password and the ID which is how Jenkins will display the creds
Click on Save

Select the Creds you entered and click on Save
Click on Log and you should see output similar to this:
[06/07/21 11:21:46] [SSH] Checking java version of java
[06/07/21 11:21:46] [SSH] java -version returned 11.0.11.
[06/07/21 11:21:46] [SSH] Starting sftp client.
[06/07/21 11:21:46] [SSH] Copying latest remoting.jar...
Source agent hash is F7B9C09212C05E6A48C6A52793BBFC04. Installed agent hash is F7B9C09212C05E6A48C6A52793BBFC04
Verified agent jar. No update is necessary.
Expanded the channel window size to 4MB
[06/07/21 11:21:46] [SSH] Starting agent process: cd "/jenkins" && java -jar remoting.jar -workDir /jenkins -jar-cache /jenkins/remoting/jarCache
Jun 07, 2021 10:21:47 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /jenkins/remoting as a remoting work directory
Jun 07, 2021 10:21:47 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /jenkins/remoting
<===[JENKINS REMOTING CAPACITY]===>channel started
Remoting version: 4.7
This is a Unix agent
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by jenkins.slaves.StandardOutputSwapper$ChannelSwapper to constructor java.io.FileDescriptor(int)
WARNING: Please consider reporting this to the maintainers of jenkins.slaves.StandardOutputSwapper$ChannelSwapper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Evacuated stdout
Agent successfully connected and online
Specifically the last line
Agent successfully connected and online
And the new node should be listed under Nodes

If there are errors, it's usually one of two things:
- Java 11 not installed
- Rights on /jenkins folder
What have we done?
- Added Docker Plugins to Jenkins
- Created a Remote Jenkins Ubuntu Node
- Installed Docker on the Remote Jenkins node
- Connected the node to the Jenkins server using SSH
Jenkinsfiles
The reading I've done over the years has led me to keep my Jenkinsfiles as self-contained as possible, so they contain everything I need.
I have 3 Jenkinsfiles in 3 separate projects in a group on Gitlab.com.
Jenkinsfile – Create Base Image
pipeline {
  environment {
    imagename = "192.168.40.43:8081/dockerimages/library/base_ubuntu_focal"
    pushimage = "192.168.40.43:8081/dockerimages/library/base_ubuntu_focal:latest"
  }
  agent { label 'docker' }
  options {
    ansiColor('xterm')
  }
  stages {
    stage('CleanWorkspace') {
      steps {
        cleanWs()
      }
    }
    stage('GITLab Checkout') {
      steps {
        git branch: 'master', credentialsId: 'lancert', url: 'git@gitlab.com:myrepo/docker_pipeline/docker_build_base_ubuntu.git'
      }
    }
    stage('Build image') {
      steps {
        sh "sudo apt-get install -y debootstrap"
        sh "debootstrap focal focal > /dev/null"
        sh "tar -C focal -c . | docker import - 192.168.86.223:8081/dockerimages/base_ubuntu_focal"
        sh "docker push 192.168.86.223:8081/dockerimages/base_ubuntu_focal:latest"
      }
    }
    stage('Push image') {
      steps {
        withDockerRegistry([credentialsId: 'ProGet_david', url: 'http://192.168.40.43:8081']) {
          sh 'docker push $pushimage'
        }
      }
    }
  }
}
This is broken down as follows
The environment variables here are global to the whole Jenkinsfile; I've set the image names pointing at the ProGet Docker repo.
environment {
  imagename = "192.168.40.43:8081/dockerimages/library/base_ubuntu_focal"
  pushimage = "192.168.40.43:8081/dockerimages/library/base_ubuntu_focal:latest"
}
The node this should run on is defined using the docker label assigned to it.
agent { label 'docker' }
options {
  ansiColor('xterm')
}

The Options section under agent has Jenkins display colour coding in its output and needs the following plugin installed.

The Stages refer to the green boxes seen on the Project page

I make a habit of cleaning up the /jenkins workspace for the project before I start as not doing so has caused debugging issues. I like to make sure this is idempotent.
stage('CleanWorkspace') {
  steps {
    cleanWs()
  }
}
There is a checkout of the GitLab code; this repeats the checkout Jenkins already performs to fetch the Jenkinsfile, so it won't do much, but it keeps the step explicit in the code.
stage('GITLab Checkout') {
  steps {
    git branch: 'master', credentialsId: 'lancert', url: 'git@gitlab.com:myrepo/docker_pipeline/docker_build_base_ubuntu.git'
  }
}
The build stage in this instance is just a set of commands for Ubuntu to run locally.
stage('Build image') {
  steps {
    sh "sudo apt-get install -y debootstrap"
    sh "debootstrap focal focal > /dev/null"
    sh "tar -C focal -c . | docker import - 192.168.86.223:8081/dockerimages/base_ubuntu_focal"
    sh "docker push 192.168.86.223:8081/dockerimages/base_ubuntu_focal:latest"
  }
}
I've not added a test here; if I had, I'd do a docker image ls and make sure my image was imported.
Push the image up to the ProGet server using withDockerRegistry. If the URL isn't added here, Jenkins will attempt an upload to docker.io.
stage('Push image') {
  steps {
    withDockerRegistry([credentialsId: 'ProGet_david', url: 'http://192.168.40.43:8081']) {
      sh 'docker push $pushimage'
    }
  }
}
Jenkinsfile – Patch Base Image
Using the Jenkins Pipeline method described below, it's possible to have this pipeline watch the one above and run automatically after a successful build.

Select Build after other projects are built and watch the first project.
pipeline {
  environment {
    imagename = "192.168.40.43:8081/dockerimages/library/base_ubuntu_focal_patched"
    pushimage = "192.168.40.43:8081/dockerimages/library/base_ubuntu_focal_patched:latest"
  }
  agent { label 'docker' }
  options {
    ansiColor('xterm')
  }
  stages {
    stage('CleanWorkspace') {
      steps {
        cleanWs()
      }
    }
    stage('GITLab Checkout') {
      steps {
        git branch: 'master', credentialsId: 'lancert', url: 'git@gitlab.com:myrepo/docker_pipeline/docker_patch_ubuntu.git'
      }
    }
    stage('Build image') {
      steps {
        script {
          updateme = docker.build imagename
        }
      }
    }
    stage('Test image') {
      steps {
        script {
          updateme.inside {
            sh 'echo "Tests passed"'
          }
        }
      }
    }
    stage('Push image') {
      steps {
        withDockerRegistry([credentialsId: 'ProGet_david', url: 'http://192.168.40.43:8081']) {
          sh 'docker push $pushimage'
        }
      }
    }
  }
}
You’ll see that most of the format for the Jenkinsfile is the same using the same stages
The two differences are
The Build image stage
stage('Build image') {
  steps {
    script {
      updateme = docker.build imagename
    }
  }
}
Using a variable updateme, the docker.build method will take the Dockerfile held in the same repository and folder as the Jenkinsfile, build it, and tag it with the value of the imagename variable declared at the top of the Jenkinsfile.
The updateme variable is then used with the docker inside method in this test
stage('Test image') {
  steps {
    script {
      updateme.inside {
        sh 'echo "Tests passed"'
      }
    }
  }
}
At this stage, Jenkins will spin up the image built in the previous stage as a container and run the echo command.
These tests can be expanded to check the last update time or any other tests.
Once the test is complete Jenkins will take the container down and remove it.
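As a hedged sketch of what an expanded test stage could look like (the checks inside are illustrative assumptions, not from the original pipeline), you could assert something real about the patched image, such as the copied sources list being present:

```groovy
stage('Test image') {
  steps {
    script {
      updateme.inside {
        // Illustrative checks - substitute your own:
        // the copied apt sources file made it into the image...
        sh 'test -f /etc/apt/sources.list.d/sources.list'
        // ...and it includes the security pocket we expect.
        sh 'grep -q focal-security /etc/apt/sources.list.d/sources.list'
      }
    }
  }
}
```

Because .inside runs the steps in a throwaway container, a non-zero exit from any sh line fails the stage and stops the image being pushed.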
It’s worth noting that in the last stage of this file there is a line
sh 'docker push $pushimage'
The variable pushimage is declared at the top of the Jenkinsfile; because it is being used in the sh (shell) context, it is written as $pushimage.
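The distinction is easy to demonstrate outside Jenkins: the environment{} block exports its entries as shell environment variables, so a single-quoted sh step passes the literal text $pushimage through to the shell, which expands it at run time. A stand-in sketch (echo substitutes for the real docker push):

```shell
# Jenkins exports environment{} entries to the shell; simulate that here.
export pushimage="192.168.40.43:8081/dockerimages/library/base_ubuntu_focal:latest"

# Single quotes in the Jenkinsfile mean the literal string reaches sh,
# and the shell (not Groovy) expands the variable:
out=$(sh -c 'echo docker push $pushimage')
echo "$out"
```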
Jenkinsfile – Install N8N
The final Jenkinsfile is the same as the last one, except the imagename and pushimage variables have changed.
pipeline {
  environment {
    imagename = "192.168.40.43:8081/dockercontainers/n8n"
    pushimage = "192.168.40.43:8081/dockercontainers/n8n:latest"
  }
  agent { label 'docker' }
  options {
    ansiColor('xterm')
  }
  stages {
    stage('CleanWorkspace') {
      steps {
        cleanWs()
      }
    }
    stage('GITLab Checkout') {
      steps {
        git branch: 'master', credentialsId: 'lancert', url: 'git@gitlab.com:myrepo/docker_pipeline/build_n8n.git'
      }
    }
    stage('Build image') {
      steps {
        script {
          n8n = docker.build imagename
        }
      }
    }
    stage('Test image') {
      steps {
        script {
          n8n.inside {
            sh 'echo "Tests passed"'
          }
        }
      }
    }
    stage('Push image') {
      steps {
        withDockerRegistry([credentialsId: 'ProGet_david', url: 'http://192.168.40.43:8081']) {
          sh 'docker push $pushimage'
        }
      }
    }
  }
}
What have we learnt here?
- Using Jenkinsfiles
- The format of a Jenkinsfile
- How Stages are displayed
- Using $variable with sh
- Using docker.inside for testing
Using a Jenkinsfile in Jenkins
We have 3 Jenkinsfiles; how are they used in Jenkins? They can be used mainly as a Pipeline or a MultiBranch Pipeline. As the complexity here is low, this post will configure a Pipeline to pull down the Jenkinsfiles.
From the Home Dashboard click on New Item

Select Pipeline and click on OK

Enter a Description, scroll down to Pipeline

Select
- Pipeline script from SCM
- SCM – Git
- Enter the Repo URL and select the credentials
- Choose the Branch (Leave as default if unsure)
Scroll down and make sure Script Path has Jenkinsfile in there

Click on Save
From the Job screen

Click Build Now
The Build will create the stages listed in the Jenkinsfile and run through them

Click on the Current top build number

Repeat this for the three Jenkinsfiles.
Thoughts
There is a lot here. I've put this all down as notes for myself and hope they might help others. This only scratches the surface of what can be done with Jenkins, Docker and automation.
Further Reading







