Until Christmas 2021 my homelab had been set up fairly consistently for about two years. I documented it fully here:

A Home Network setup
The home network used to be a family PC with an AOL connection; time, however, has moved on, and it's possible to have setups at home which rival some small businesses. I thought I'd share my current home network setup (well, some of it) and how it all fits together.


A new year, however, brings changes, and I wanted to migrate what I have into the cloud while keeping a small subset of servers locally on the home network.

A Hybrid solution was the way forward and this is what I’ve got set up.



As with all my posts, I can't guarantee the spelling will be perfect, and I know the grammar will annoy some; however, these posts are written as a reference for me, and I make them public as others might find them handy.

Oh, and if my spelling and grammar are your only takeaway from this… Keep it to yourself.


The core driver for the whole project was to get the hardware I’d been hosting at home shut down as much as possible and recycled to save electricity costs.

The first question is, which cloud?

There are many options, from the hyperscalers AWS, Azure and Google. I looked at these, did some math, and realised they were going to work out expensive for my needs.

I started looking at Linode, Digital Ocean and Vultr and eventually, after working through each platform, made the choice to use Vultr.

It's cheap, simple, has a working Terraform provider, and is quick to get boxes up and running.

Once the cloud provider was set up, the next challenge was how to get the home LAN to talk to the Vultr LAN. Initially I was looking for a point-to-point WireGuard VPN tunnel, but I ended up using Tailscale for this instead.

The final decision was what needed to be on the Vultr LAN and how was I going to monitor it?

The remainder of this post covers the end result.


At a high level, this is what I've ended up with.

So let’s break this down into its components.


Ubuntu – COST: Free

For the last few years, I've been using RHEL8 on my homelab servers. RHEL8 isn't an option on the cloud provider I chose, so with a selection of operating systems available I went for the easiest option for my needs.

At this point in my life, it's about ease, supportability and keeping things working. Ubuntu Server supports what I need it to, has good documentation and an active community.

Landscape – COST: Free up to 10 clients

With RHEL there is a really good centralised back-end management system as part of the deal (Red Hat Satellite). I wondered what Ubuntu offered; the answer to that question is Landscape.

Landscape is the leading management tool to deploy, monitor and manage your Ubuntu servers.


Essentially, Landscape is all about patching; for me this is its core functionality. Servers are attached to Landscape using the following process:

1. Update the repositories

Make sure your repositories are up-to-date by running the following command in a terminal:

sudo apt-get update

2. Install the client

Now landscape-client can be installed in one quick operation with the following command:

sudo apt-get install landscape-client

3. Register the computer

Now you can register the new computer with the following command, substituting the title of your computer where it says “My Web Server”. Your computer’s title can be any word or phrase that you want to use.

sudo landscape-config --computer-title "My Web Server" --account-name cunfzsz9 --registration-key 755252246242624

Alternatively running this program with no arguments will prompt you for the same information in an interactive wizard.

4. Accept the computer

Now that the machine is registered as a pending computer, it can be accepted into your account. Log into Landscape with your user account information and click on the Pending Computers link. There you will see the machine you just registered.

Select the new machine and click Accept. This completes the registration process and causes the new machine to exchange information with Landscape.
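Steps 1–3 above can be collapsed into a small script to run on each new VM. This is a sketch: the account name and registration key are placeholders for your own values, and the `--silent` flag (which skips the interactive wizard) is worth checking against your landscape-config version.

```shell
# enroll-landscape.sh: steps 1-3 in one go (account name and key are placeholders)
cat > enroll-landscape.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
sudo apt-get update
sudo apt-get install -y landscape-client
# --silent skips the interactive wizard; the hostname becomes the computer title
sudo landscape-config --computer-title "$(hostname)" \
  --account-name YOUR_ACCOUNT_NAME \
  --registration-key YOUR_REGISTRATION_KEY \
  --silent
EOF
bash -n enroll-landscape.sh && echo "enroll-landscape.sh parses OK"
```

Step 4, accepting the machine, still happens in the Landscape web UI.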

Once connected, patching can be automated using Landscape and its built-in task functionality, and reboots can be scheduled. Each machine has facts about it pulled down, and there is a basic monitoring function available.

Logging in will let you know what is going on with your systems.


Vultr – COST: Circa $50 a month (6 servers)


Choosing a hosting provider for your cloud project is a personal choice. Plenty of people reading this will say a hyperscaler should have been used; others would suggest Digital Ocean. For me it boiled down to getting $100 of credit for a month. I used that to work out how it all worked and to fire up six virtual machines.

Vultr offers what I'd suggest are the basics a homelab setup needs from a hosting platform. I've used the Cloud Compute option, which provides a selection of operating systems hosted in a range of locations.

Creating a new Cloud Compute instance within the web GUI is very simple:

Choose a location

Choose an OS

I’ve set up all my servers using Ubuntu 20.04

Choose an S/M/L T-shirt size for the build

There is then a selection of additional features.

A Virtual Private Cloud is available when the OS is set up in the same location, and provides an internal NAT address (all the servers come with publicly accessible IPs).

Your own SSH keys can be used with the machine as well.

Once you’ve set up all your options the quantity and cost will be displayed and you can click on Deploy Now.

The process of bringing up a machine is pretty quick. The machine is connected to the Internet with a public IP, SSH is enabled, and, at least in the Ubuntu images, UFW is enabled by default.

The Virtual Private Cloud/NAT address is handy if you've got services talking to each other internally, like databases or pipelines. It's advisable to have one server set up as a jumpbox: allow SSH externally from your own public IP address, and use UFW to disable SSH on the public IP of all other machines.
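As a sketch of that jumpbox pattern, here is a UFW rule set for the non-jumpbox machines. The subnet is a made-up example of an internal VPC range, and your addresses will differ; writing the rules to a file first lets you review them before running anything.

```shell
# ufw-nonjumpbox.sh: SSH reachable only from the private VPC/NAT side
# (the 10.1.96.0/20 subnet is an example; substitute your own VPC range)
cat > ufw-nonjumpbox.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
sudo ufw default deny incoming
sudo ufw default allow outgoing
# allow SSH only from the internal VPC subnet, never the public IP
sudo ufw allow from 10.1.96.0/20 to any port 22 proto tcp
sudo ufw enable
EOF
bash -n ufw-nonjumpbox.sh && echo "ufw-nonjumpbox.sh parses OK"
```

On the jumpbox itself, the `allow from` rule would instead name your own public IP.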

I found the $12 a month VM pretty much covered all the servers I’ve been running.

As well as the manual option above, there is also a Vultr Terraform provider:

GitHub – vultr/terraform-provider-vultr: Terraform Vultr provider
Terraform Vultr provider. Contribute to vultr/terraform-provider-vultr development by creating an account on GitHub.


This works as expected and enables building VMs as code.
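As a minimal sketch of what that looks like, the following writes a `main.tf` that brings up one Cloud Compute instance. The `region`, `plan` and `os_id` values are examples and should be checked against the provider documentation; the API key is read from the `VULTR_API_KEY` environment variable rather than hard-coded.

```shell
cat > main.tf <<'EOF'
terraform {
  required_providers {
    vultr = {
      source  = "vultr/vultr"
      version = "~> 2.0"
    }
  }
}

# the provider reads the key from the VULTR_API_KEY environment variable
provider "vultr" {}

resource "vultr_instance" "web" {
  label  = "web-1"
  region = "lhr"          # example location
  plan   = "vc2-1c-2gb"   # example 1 vCPU / 2 GB plan
  os_id  = 387            # example OS id (Ubuntu 20.04 at the time of writing)
}
EOF
```

`terraform init && terraform apply` then brings the machine up.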

Over the course of the month, the cost has been $40 for 6 × 2GB machines.

If you’d like the same $100 deal I got feel free to use this referral code.


It’s cheap, it’s easy to use, it’s basic and it works.


Tailscale – COST: Free

Tailscale is a zero config VPN for building secure networks. Install on any device in minutes. Remote access from any network or physical location.


When you put your services in multiple locations, you start to work in the world of cross-site routing: linking sites to each other and making them work seamlessly together.

When I started looking into this, my first inclination was to link the two sites (home and Vultr) using a WireGuard VPN tunnel; it's usually simpler to set up than OpenVPN. However, yet again I fell foul of the double-NAT environment I have at home.

I went back to the drawing board and asked myself what I was looking to do.

I wanted all the servers at home and Vultr to be able to communicate with each other.

After a bit of googling, I came across Tailscale, the answer to this and many other questions.

What is Tailscale?

Tailscale is a zero-config VPN.

It's a solution where an agent is installed on every device you want on the VPN mesh. The agent sets up a new network interface on that device and assigns it a Tailscale-managed IP, unique within your account, which is effectively static (it's assigned by DHCP but doesn't change).

Traditionally networking between sites would look like this

With Tailscale it looks like this

Because each device is on the same IP network regardless of location, it's essentially one large network.

To get this all working, log in and create an account at Tailscale.com, then run the install script:


curl -fsSL https://tailscale.com/install.sh | sh

On a headless server, this will install and prompt you to run

tailscale up

On the initial run, this will ask you to log in and provide a link, which is used to connect the device to your account, and you are done.

There will be a new NIC on your server, and you'll be able to ping any of your other registered Tailscale devices on their Tailscale IPs:

tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet scope global tailscale0
       valid_lft forever preferred_lft forever

Tailscale runs as a service, tailscaled, which is enabled on install, so the Tailscale network comes up on boot.

I can use the Android app on my phone or the Chromebook to access the Tailscale network.

Tailscale Interface

The web interface

This is a functional interface with the obvious items in it. The ones which stand out are:

A list of servers attached to your subscription/Tailscale network.

Because it's only me on the network, I'm not worried about access controls; it's good to see, however, that they can be configured per user.

The DNS was interesting: they have something called MagicDNS, which at the time of writing was a list made up of the hostnames reported by the machines in the machine list.

I, however, installed Pi-hole on an Ubuntu machine at home, added .tailscale as a local domain, and added the machines as local DNS entries on the Pi-hole. Once this was done, I pointed Tailscale's global nameserver at the Tailscale IP of the Pi-hole server and had my own local DNS.
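At the time of writing, Pi-hole v5 stores those Local DNS entries in /etc/pihole/custom.list in plain hosts-file format, so the mapping can also be managed by hand. The names and addresses below are made-up examples (Tailscale hands out addresses from the 100.64.0.0/10 range):

```
100.100.1.10  web.tailscale
100.100.1.11  db.tailscale
100.100.1.12  jenkins.tailscale
```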


So I've ended up with a mesh VPN network across two sites, backed by SSO and MFA, with my own local DNS server, while self-hosting no routers or VPNs of my own. Should I want to add additional locations, like family houses, to my Tailscale network, I can do this. No additional routers needed.

On the Vultr servers, UFW has the firewall set up accordingly. Only the SQL database, the web server and the Talk server use the NAT interface, so all that traffic stays in the Vultr ecosystem (reducing costs), and every other machine has SSH enabled only on the Tailscale LAN.

This does mean that at home, to administer my network, I need to be attached to Tailscale on my Linux desktop or Chromebook. Services, however, are consumed on public or home NAT IPs if they are public, and on Tailscale if they are an admin interface.

This is one of those products where, when you get it, you have an epiphany; until then, you keep asking "but it's just a VPN, isn't it?".


Netdata – COST: Free

Netdata – Monitor everything in real time for free with Netdata
Open-source, distributed, real-time, performance and health monitoring for systems and applications. Instantly diagnose slowdowns and anomalies in your infrastructure with thousands of metrics, interactive visualizations, and insightful health alarms.


As I'm moving things to SaaS and away from self-hosting, it made sense to monitor the servers using the excellent Netdata.

Create War Rooms to group servers

Run the provided one-line install command to install the agent and claim the server


I then point all the alerts to a private Discord channel (as I do with almost everything else)
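Netdata Cloud handles those notifications centrally, but the agent can also post to Discord directly if you prefer. As a sketch, these are the relevant settings in health_alarm_notify.conf (opened with the bundled edit-config helper); the webhook URL is a placeholder:

```
SEND_DISCORD="YES"
DISCORD_WEBHOOK_URL="https://discord.com/api/webhooks/<your-webhook>"
DEFAULT_RECIPIENT_DISCORD="alerts"
```

The recipient maps to a channel name, so "alerts" here would post into an #alerts channel.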

Netdata then provides a consolidated view of the whole server setup (in that War Room) or a server by server display.

Freshping – COST: Free

Freshping | Free Reliable Website Monitoring software by Freshworks
Free reliable website uptime & availability monitoring. Monitor 50 URLs every minute for Free.


For monitoring public-facing services like this website, Nextcloud and a chat server, I'm using the free tier of Freshping to ensure each service is up and available.

All alerts from Freshping go into a Discord channel via Zapier.

Again, nice and easy: alerts in one place.


Mistborn – COST: Free

Stormblest / mistborn
Mistborn is your own virtual private cloud platform and WebUI that manages self hosted services, and secures them with firewall, Wireguard VPN w/ PiHole-DNSCrypt, and IP filtering. Optional…


Why, if I’m using Tailscale do I need another VPN?

This one is for travelling: it's a VPN directly into home, on an isolated network, to which I can provide access for family in foreign countries or for myself while travelling.

The setup and what it does are very well documented at the link above, and it does far more than I’m using it for. As a public-facing edge server, it is well put together and secure. You need to attach to its VPN to manage the system.

It installs from a script and runs on most versions of Linux

Recommended System Specifications:

Use Case                Description                                                                     RAM      Hard Disk
Bare bones              WireGuard, Pi-hole (no Cockpit, no extra services)                              2 GB     15 GB
Default                 Bare bones + Cockpit                                                            2 GB+    15 GB
Low-resource services   Default + Bitwarden, Tor, Syncthing                                             4 GB     20 GB
High-resource services  Default + Jitsi, Nextcloud, Jellyfin, Rocket.Chat, Home Assistant, OnlyOffice   6 GB+    25 GB+
SIEM                    Default + Wazuh + Extras                                                        16 GB+   100 GB+

One line direct installation

wget -O install.sh https://gitlab.com/cyber5k/mistborn/-/raw/master/scripts/install.sh && sudo -E bash ./install.sh


Clone repository and examine files first

git clone https://gitlab.com/cyber5k/mistborn.git
# Examine files if desired
sudo -E bash ./mistborn/scripts/install.sh

Get the default admin WireGuard profile (wait 1 minute after the "Mistborn Installed" message):

sudo mistborn-cli getconf 

Connect via WireGuard then visit http://home.mistborn

For more information, see the Installation section on the git page.

Once installed, setting up other WireGuard users is done in the WebUI mentioned above.

The DNS setup is managed on the server using Pi-hole (at the time of writing this was actually an old version, but it still worked).

As well as WireGuard and Pi-hole, Mistborn is designed to have public-facing services run on it. It does this by installing Docker images, which are pretty well integrated into Mistborn once installed via the WebUI.

As out of the box solutions go, I particularly like the simplicity of setting up remote users for VPN access. I’ve been able to get family onboarded quickly by sending over a generated config.

Applications – Self-hosted

Mattermost – COST: Free

Mattermost | Open Source Collaboration for Developers
Mattermost is a secure, open source platform for communication, collaboration, and workflow orchestration across tools and teams.


I ran a Rocket.Chat server for 2 years; installing and maintaining Mattermost has been an absolute dream in comparison.

Onboarding users has been pretty painless, it’s given me the same features we used on the Rocket.chat server and it looks a lot prettier.

I run it on Vultr, on a machine with 1 CPU and 2GB of RAM, and it hasn't peaked out, slowed down or given me any issues.

Ghost – COST: Free

How to install Ghost, the official guide
Everything you need to know about working with the Ghost professional publishing platform.


Running on Node.js, the Ghost CMS has been the platform this blog has run on for 3 years. I was able to quickly make a backup of the old server running on RHEL8 at home, install a new server on Ubuntu 20.04, restore the backup, and be running after a DNS change.

As a platform, I like it because it's clean and simple, uses few resources, and has some nice themes.

Axigen – COST: Free (5 domains)

Free Mail Server | Axigen
Axigen Free Mail Server is a great alternative to open source. Runs on Linux and Windows and offers free email server users with calendars, WebMail, and mobile access.


For some, hosting their own mail server brings up memories of Postfix, SASL setups, spam-list management and all the things SaaS was set up to avoid.

For me, it’s a simple install, a wizard and I’m done.

Axigen provides up to 5 domains free, which suits me. The only thing I've had to do slightly differently is generate a Let's Encrypt cert which contains each of the domains I host on these servers, as the SSL side of the mail setup needs them all to be in the same cert. This isn't well documented.
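For reference, this is roughly the Certbot invocation that produces one certificate covering every mail domain. The domains are placeholders, and `--standalone` assumes port 80 is free on the mail server while the cert is issued:

```shell
# renew-mail-cert.sh: one Let's Encrypt cert listing every hosted mail domain
cat > renew-mail-cert.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
sudo certbot certonly --standalone \
  -d mail.example-one.com \
  -d mail.example-two.com \
  -d mail.example-three.com
EOF
bash -n renew-mail-cert.sh && echo "renew-mail-cert.sh parses OK"
```

The resulting combined cert and key can then be imported into Axigen's SSL settings.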

Other than this little adventure, Axigen has a nice web interface for management and mail, supports all the usual mail protocols, and does what it's supposed to.

PiHole – COST: Free

1. Install a supported operating system You can run Pi-hole in a container, or deploy it directly to a supported operating system via our automated installer.


I've either been using BIND with Webmin or AdGuard over the last few years, as Pi-hole didn't play nice with RHEL8 when I last tried it.

I can see why it’s so popular, another example like Mistborn of quick, simple software which installs and does a job.

As well as the Pi-Hole I run on the VPN Access point, I have an internal server that links to the Tailscale setup and the Google Wifi DHCP.

Jenkins – COST: Free

Jenkins – an open source automation server which enables developers around the world to reliably build, test, and deploy their software


Sure, there are cooler, more trendy alternatives for running home-automation pipelines, like CircleCI or GitLab runners. Sometimes, however, it's just easier running what you know.

I can install Jenkins and be set up with my pipelines (which are basically Jenkins running cron jobs) in minutes.

Applications – SaaS

Gitlab – COST: Free

Iterate faster, innovate together
GitLab’s DevOps platform is a single application for unparalleled collaboration, visibility, and development velocity. Learn more here!


The reason for this choice was a pretty simple one: I've been using a self-hosted GitLab for years. I'm not a huge user of most of the features it or its competitors offer, as I mainly just use it as a Git repository (so I'm not using the DevOps tooling).

While it's free and I can use it, it made sense to pull all my projects out of my self-hosted Git and upload them to the SaaS version. One less thing to host, and now I have a script that will pull the data out should my needs change.
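My script isn't published, but as a sketch, something like the following mirror-clones every project the token's user owns via the GitLab v4 API. The token is a placeholder, and it assumes curl and jq are installed:

```shell
# gitlab-mirror.sh: mirror-clone every project owned by the token's user
cat > gitlab-mirror.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
TOKEN="YOUR_PERSONAL_ACCESS_TOKEN"
curl -s --header "PRIVATE-TOKEN: ${TOKEN}" \
  "https://gitlab.com/api/v4/projects?owned=true&per_page=100" |
  jq -r '.[].ssh_url_to_repo' |
  while read -r repo; do
    git clone --mirror "$repo"   # bare mirror, easy to push elsewhere later
  done
EOF
bash -n gitlab-mirror.sh && echo "gitlab-mirror.sh parses OK"
```

Bare mirrors keep every branch and tag, so pushing the history to a new remote later is a single `git push --mirror`.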

Nextcloud – COST: £10 a month

Managed Nextcloud Hosting | IONOS by 1&1
Get secure cloud storage and collaboration tools for your team. IONOS by 1&1 takes care of hosting, updates and backups – you focus on your projects. Start now!


I've been using Nextcloud since it forked from ownCloud many years ago.

The problem I’ve always had with Nextcloud and many of the things I’ve self-hosted is that I have a tendency to tear it apart and rebuild it.

As I use Nextcloud a lot as a primary data store, I decided to host it at one of the recommended suppliers, IONOS. Given the choice again, I would probably (and more than likely will) not host with them. Their support is terrible, Freshping reports the server down quite often, and they don't run the latest version or even publish an upgrade plan to get to it; upgrades kind of just happen when someone proactive is on shift.

This is about the hosting provider, NOT the product; the product is great.

StackOverflow for Teams – COST: Free

Stack Overflow for Teams Pricing Pricing for Teams of all sizes
We have pricing plans that will fit the needs of your team.


The last question was: as I'm learning, scripting and finding answers, where do I keep them? Historically I've been using Guru. That's felt a bit stale, so I've started using Stack Overflow for Teams as my knowledge base.

Stack Overflow for Teams is now free forever for up to 50 users
Stack Overflow for Teams, our collaboration platform for building a knowledge base inside your organization, is now free.



I've learned a lot. The primary lesson is that there is a cost to doing anything in the cloud, and you should be aware of that long-term cost before you start. It's been nice to drop the servers at home, repurpose some of the hardware and donate the rest.

While I love the self-hosted movement, and there are some things I would always prefer to self-host, using the SaaS option for things like GitLab and Netdata does make life a lot easier.

By davidfield

Tech Enthusiast