The home network used to be a family PC with an AOL connection; time, however, has moved on, and it's possible to have setups at home which rival some small businesses.
I thought I'd share my current home network setup (well, some of it) and how it all fits together.
This is about the 5th iteration of this setup, with previous versions being based on VMware, Docker and native installs. The hardware I use isn't rack-mounted or desktop servers; it's old repurposed laptops and small MicroPCs, because I'm looking for low power usage. With a network of 2 users I don't need fast, only up, and using laptops specifically means I can treat their battery power as a kind of UPS.
There are some who will find this amazing, there are others who will think I’m as mad as a hatter and made totally the wrong choices everywhere, that’s the internet.
I’m open to suggestions if the conversation is an open one held in a grown-up way. If you can’t do that then I’ll ignore you. I appreciate there are better ways of doing some of this, however, you need to remember this post “works for me” and might provide others with a spark of enthusiasm.
This post is about my hardware and software choices with a little about configuration however I’m not going to go into the full details about my security setup or IP address scheme.
Living in a new build, I'm lucky to have a Fibre to the Premises (FTTP) connection giving me 1Gb/s down, 500Mb/s up. This has a (single) external static IP address assigned to it.
I've also got a backup 4G router set up as a failover device; if the connection dies on the fibre (which, to be fair, it never has) the traffic routes out over the 4G. This is on a PAYG unlimited SIM which gives me about 30Mb/s down, 5Mb/s up, and that's fine as an "able to use the internet" failover solution.
I don't have the inbound routing set up for the self-hosted servers over this link (yet), so during a failover they will be unreachable.
The connection between the FTTP box and the internal network uses the Mikrotik RB760iGS RouterBoard.
I chose this because the Asus home router was topping out at around 300Mb/s download on the internet connection. This small, low-power box was able to deliver the full 1Gb/s download provided on the line.
As well as being a hefty little router, the RouterBoard's RouterOS is a very full-featured, if somewhat awkwardly laid out, interface. It has bells and whistles I've not seen on many other devices in this price range.
This allows a first-line firewall to be put in place at the point of entry, plus some good logging which can be forwarded to the log aggregator and mapped out.
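Shipping those logs off the box is a two-line job in RouterOS; the aggregator's address and port here are made up:

```
# Define a remote syslog target, then send the firewall topic to it.
/system logging action add name=remote-syslog target=remote remote=192.168.1.50 remote-port=5140
/system logging add topics=firewall action=remote-syslog
```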
Inside the network, there are 2 devices plugged into this RouterBoard.
The Mikrotik Routerboard hAP ac2
This RouterBoard provides a segregated network and WiFi at home at 2.4GHz and 5GHz.
This is my home smart device LAN; it provides WiFi access for the various smart devices. It's firewalled off from the core home network because running Wireshark shows a large amount of phone-home traffic, so all these devices sit on their own network for a little bit of safety, with some firewall rules allowing point-to-point access to the core network on certain ports.
Google Wifi Mesh
The second network is the Google Wifi mesh covering all the floors and rooms in the house and the shed at the end of the garden.
A wireless mesh is created when multiple mesh routers can communicate with each other. Done throughout the house, this ensures each Google Wifi puck is able to deliver a strong WiFi signal and internet access.
I replaced a series of powerline adaptors (devices that extend your network over your house's power supply) with the mesh network, and while I have generally been happy with what Google is providing, this is a consumer product, so there is little that can be done setup-wise beyond changing the DNS address and DHCP range.
They work by having a primary puck plugged into the RouterBoard so it can see the internet, then joining each additional puck to the mesh; the software works out what the new puck can see, what its distance from the other pucks is, what the signal strength is, and so on.
For what it is, it's a good product: quick and easy to set up, updates itself, integrates with the Google Home app, and I don't need to do much with it. It does, however, ensure that in the toilet at the point furthest away from the Mikrotik RouterBoard I still get good WiFi (very much a first world problem).
So at a very high level, this is how my home setup communicates with the internet…
The Virtual Servers
Let’s head straight into the Home Lab..
With a 1Gb ethernet 12 port switch plugged into one of the Google Mesh pucks the communication between the Office and the Puck is ethernet, from the puck to the internet is over wifi.
As I stated in the preamble this homelab is made up of a ragtag bunch of laptops that I’ve bought off eBay at stupidly low prices and retrofitted as needed.
- Dell XPS 13 – 250GB SSD, 16GB RAM, i7 processor – I actually bought this new, but a drop the day after the warranty ran out means the fan squeaks and the screen has a minor crack in it.
- Lenovo Thinkpad 420 – 256GB SSD, 8GB RAM, i5 processor – Purchased for £100 off eBay and upgraded for about £300.
- MacBook Pro 2015 15″ – 128GB SSD, 16GB RAM, i7 processor – Another eBay purchase at £200; it won't actually run OSX, but it runs Ubuntu Server just fine.
- 2 x Gigabrix boxes – one has an i7, the other an i5 processor, with a 64GB SSD and 8GB RAM each – Donated to the cause a long time ago.
I’ve also got for centralised storage a QNAP NAS with 3 x 4TB HDD and 1 x 512Gb SSD acting as a cache device providing 12Tb of centralised Raid 1 storage over iSCSI.
There is enough grunt here to run what I need with a little extra to spin up servers for testing when I need to.
4 of these 5 machines (the Dell, the Lenovo and the two Gigabrix boxes) are set up to run Proxmox as a virtualisation cluster.
Proxmox is the Hyper-V, VMware or XenServer you've never heard of. Out of the box, the free tier provides more features than you'll need (to start with), features you'd usually need to start scaling up and paying for on those other platforms.
I ran into it many years ago when the other options all needed a fat client to operate properly; Proxmox did everything via a web browser (or the CLI for 1-in-1000 tasks), which made life far simpler on my Chromebook.
Since then I've sworn by this solution, which will create either virtual machines or LXC containers from within the GUI.
Proxmox is based on Debian and does have a paid enterprise tier; however, 1 nag screen when logging in is all you'll see when using it, and as already mentioned, things like clustering, Ceph, iSCSI, backups and HA failover are all supported out of the box in the free tier. In general, the support on the forums for the product is pretty good.
Updates are done using Debian's APT package manager (which I've automated using Ansible/Jenkins).
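A minimal sketch of the kind of playbook that automation runs; the host group name here is an assumption, not my real inventory:

```yaml
# Refresh APT and apply all pending upgrades on the Proxmox nodes.
- hosts: proxmox_nodes
  become: true
  tasks:
    - name: Update the APT cache and apply pending upgrades
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
```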
Creating a Proxmox cluster means you only need to log in to one of the boxes Proxmox is installed on to be able to manage any of the machines in the cluster.
Viewing the console of a virtual machine is handled using the HTML5-driven SPICE interface. This again means that no proprietary plugins need to be installed on the management console.
For the mobile hungry there is also a limited official Proxmox Android app (not sure about iPhone, I don't use one), or the Aprox app, which I prefer to use on the Chromebook or Android tablets. Personal preference; it just feels nicer.
Neither app will provide a full feature set, they will however let you stop or start machines, attach to the console and see CPU/RAM usage.
The final thing I’ll say about the software is it’s pretty lightweight, the Debian install is minimal and it takes up very few of the underlying resources a big complaint I always had with VMWare or HyperV.
Each cluster node on my Proxmox cluster has access over ISCSI to the QNAP NAS box so I can create VM’s on the centralised storage and migrate them about quickly and simply if I need to.
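On the Proxmox side that shared storage is just a stanza in /etc/pve/storage.cfg; this is a sketch with a made-up portal and target, not my real config:

```
iscsi: qnap
        portal 192.168.1.20
        target iqn.2004-04.com.qnap:ts-453:iscsi.vms.abc123
        content none
```

In practice you'd usually layer an LVM storage on top of the raw iSCSI LUN so individual VM disks can be carved out of it.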
Servers – RedHat
The observant among us will notice all the Red Hat logos on the diagram. When I started doing this it was all Ubuntu, which I will go on record as stating, if you're new to this, is the OS you should be using; there is a LOT of guidance and support for it as a product. My decision to move to RHEL8 was driven mainly by the place I was working at the time, and made easier by Red Hat's handling of post-CentOS 8 and the Red Hat Developer programme providing access to 16 free RHEL servers which can be used in home lab environments as well.
What I've found using RHEL 8 over the last few years is good documentation and well-supported software. Being enterprise-level software you don't get the bleeding edge and latest versions of things out of the box; you do, however, get tried, tested and stable, which I learned very quickly was preferable to falling over every few minutes.
There are some other useful reasons for using RHEL
When you build a RHEL machine, either in the setup GUI or later from the command line, you'll attach the server to the Red Hat Subscription Manager to provide updates and access to Red Hat repositories.
The Subscription Manager, however, is more than just a watch list; it's a CMDB of sorts for your Red Hat servers which can, using Ansible code, be utilised to do many wonderful things.
The System list is a gateway to a huge amount of information on each installed server
Clicking on any of these registered devices will bring back a lot of details (which are all Ansible accessible) about the server.
Also registered either from the command line or, this time, the web page, is Insights: are your servers patched up to date, and if not, what problems will you have?
It provides a graphical breakdown of which systems need which patches and, more importantly, why they need them, along with potential mitigations or workarounds which might be put in place.
As I patch my servers weekly I find the feedback from this invaluable to ensure that there are no gotchas or a minimal number of them.
Clicking through will lead, on this now-deleted server, to some concise information around the issue.
If you want to you can write code to mitigate this issue on this and any future services that may have the same issue.
Unlike some OSes, Red Hat updates are fully tested before release to ensure there are no issues. While I'm all for patching servers as quickly as possible, I want to know that the patches and updates have been tested properly and documented. The RHEL subscription provides this in spades…
I’m able to know what patches are going onto which systems and if needed pull that single patch on a single server.
All this might seem like overkill for a home environment, and that's probably a spot-on analysis. However, I've learned by getting my fingers burnt that with some systems, like this web server, an errant Node.js patch has the potential to take out the whole server.
As your home lab grows it's good practice to start managing it as properly as you can.
Servers – OpnSense
While I’ll run applications on Redhat, I like to run my core services on OpnSense, its a secure, stable, BSD back platform with a web interface for 99.9% of tasks.
OPNsense is an open-source, FreeBSD-based firewall and routing software developed by Deciso, a company in the Netherlands that makes hardware and sells support packages for OPNsense. It is a fork of pfSense, which in turn was forked from m0n0wall, which was built on FreeBSD. It was launched in January 2015. When m0n0wall closed down in February 2015 its creator, Manuel Kasper, referred its developer community to OPNsense. OPNsense has a web-based interface and can be used on the x86-64 platform. Along with acting as a firewall, it has traffic shaping, load balancing, and virtual private network capabilities, and others can be added via plugins
OPNsense for me is about WireGuard, DNS, DHCP and intrusion detection using the various supported plugins; however, there is a whole heap of other things this server can do. Running on BSD it's very lightweight as well, supporting what I need it to on 2GB RAM, 2 vCPUs and a 40GB HDD.
My original primary driver was to have a pretty GUI to help manage WireGuard.
This then turned into running my internal DNS (Unbound) and DHCP for the Google Wifi mesh, ensuring that the servers could be set to DHCP and be provided the same IP with each reboot, something which was a bit flaky (it's now better) on the Google Mesh devices at the time.
Servers – Ubuntu on Mac
The last OS I have running is on my 2015 MacBook Pro. This laptop doesn't run macOS at all; it could run the latest version, but I think there is an issue with the RAM, as it crashes out on boot each time.
So I tried running RHEL8 on the device, and that didn't work too well with the version I tried; things kept crashing.
At this point I went back to the old faithful, Ubuntu 20.04, which I've run as a desktop OS on Macs in various roles. This installed perfectly and lets me run some specific Docker services on it.
I have over time standardised where possible on a single OS, and only use OPNsense and Ubuntu because they are both easier to use post-installation than RedHat (sure, I could use Webmin). The 16-device deal on RedHat is really good, and if it gets turned off I'll drop it and find another server OS (probably openSUSE) to run.
The original driver for all of this was to learn about some core system applications in a safe home environment. Over time that learning has moved into using the applications to manage the setup.
Graylog is a log aggregator. Almost everything you run produces log files: output which tells you what is going on with the software. For example, type the following on RedHat:
sudo ls -l /var/log
A Linux box will show the various Linux logs, and lots of the applications here also produce comprehensive logs.
When you have a problem you're trying to troubleshoot, going from service to service trying to find out what is going on is a huge overhead.
Most servers and systems have the ability to forward their logs off the server the service is running on to an external syslog server, which at its core is what Graylog is: it receives logs directly from servers or from other syslog servers.
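On a Linux box running rsyslog that forwarding is a single line dropped into a file under /etc/rsyslog.d/; the hostname and port here are assumptions (a double @ means TCP, a single @ would be UDP):

```
*.* @@graylog.home.lan:5140
```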
Where Graylog provides value add is the ability through a web front end to search through the logs and pull data out.
Either as a system as a whole or on a server by server basis, the search facility can be freeform or using a set of search criteria. These searches can also be saved and referred back to later or have notifications sent out if they are picked up by the system.
Personally, I’ve found for alerting purposes the information I get from Syslog to be much more informative than traditional monitoring solutions if the notifications are set up correctly.
I covered the install and some setup here:
While GitLab is a publicly available service that I also use, I like to host my own local GitLab instance. I keep the scripts and code which I run to maintain the local system on this internal GitLab server.
I’ve also created a rather Heath Robinson system that ensures daily that the code and updates on the local repository are synced up to a private repo on the gitlab.com server.
If you’ve not used git before it’s at its core a version-controlled file storage system for code and configs used by teams. It was originally designed for development teams to be able to work on the same code without breaking working code.
Hosting code and configs on a git server at home is useful to ensure your automation or even backups are version controlled and provide a simple method of moving back to known working versions (which you can tag) if you do an upgrade of a setup and it stops working.
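As a minimal, hypothetical sketch of that workflow (the file name and tag are made up), tagging a known-good config and rolling back to it looks like this:

```shell
# Hypothetical example: keep a config under version control, tag the
# working state, then roll a bad change back to that tag.
workdir=$(mktemp -d)
cd "$workdir"
git init -q configs && cd configs
git config user.email "lab@example.com"   # local identity so commits work anywhere
git config user.name "Home Lab"

echo "listen_port=8080" > app.conf        # the known working config
git add app.conf
git commit -qm "known working config"
git tag v1-working                        # tag it so it's easy to find later

echo "listen_port=9999" > app.conf        # an "upgrade" that breaks things
git commit -qam "experiment"

git checkout -q v1-working -- app.conf    # roll just that file back
cat app.conf                              # back to the working version
```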
Check_MK is a monitoring solution based on the infamous Nagios platform, as with services like Icinga and Opsview, Check_MK has wrapped its own interface and extended the workflows and usage of the original Nagios core to provide a pretty useful monitoring solution.
It’s important to know what’s going on on your home network and even more so if there are going to be issues, let you know BEFORE they occur lest you find yourself explaining to your less technically inclined significant other just why they can’t watch Netflix this evening.
To monitor each of your hosts there are options ranging from Agents which will pull the most information down and can be customized with additional monitoring capability like adding Docker or Proxmox plugins into the agent (referred to as “baking”). Agents will scan the local device (Windows, Linux RPM/Deb etc) and report back to the CheckMK Server.
If installing an Agent isn’t an option then SNMPv1,2 and 3 are all supported (SNMPv3 with Auth and Priv, Priv or Auth).
Once communication is established information on Services is pulled into CheckMK which can then either be left as default or customised to cope with your environment.
The interface and terminology used in CheckMK can be a bit strange (WATO) and it is set by default to be polling each minute which some may not like. I have however found the support forum to be very helpful when I’ve got myself confused when setting this all up.
And once set up, notifications are handled by email by default, with Slack/Discord supported as well (my preferred method).
I'd suggest that even if this takes a little while to set up, it's a good learning experience which will hold up well in any sysadmin role, and a combination of CheckMK and Graylog will provide you with lots of information when you do need to troubleshoot an issue.
At present Jenkins is my automation server; it's a centralised server that will run scripts in most languages, locally or on remote servers, via a Java Jenkins agent which is used to communicate availability back to the Jenkins server.
I started using Jenkins many years ago as an alternative to running cron tasks and needing to manage them across servers. I'd have Jenkins orchestrate the timed tasks centrally and was able to quickly see if a job had failed and work out what the issue was from the web interface. This alone saved hours of relentless fiddling around trying to debug issues on that system.
The current workflow involves having the Jenkins Agent run on a handful of the RHEL servers and use Jenkins either with the simple wizard-like WebGUI or the more complex Groovy code it uses to create workflows. The workflow will generally involve communicating with an agent on a timed event, this will then pull down the required script on that Agent server from the local GitLab. The script will then be executed, it might be a backup, an update or a certificate renewal for example.
Once complete, be that success or failure, the job will then clean up the local working directory ready for the next run.
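A minimal declarative pipeline sketch of that pattern; the repo URL, script name and agent label are all made up, and cleanWs() assumes the common workspace-cleanup plugin is installed:

```groovy
pipeline {
    agent { label 'rhel' }
    triggers { cron('H 2 * * *') }            // the timed event
    stages {
        stage('Fetch script') {
            steps { git url: 'https://gitlab.home.lan/lab/scripts.git' }
        }
        stage('Run') {
            steps { sh './backup.sh' }        // a backup, an update, a cert renewal...
        }
    }
    post {
        always { cleanWs() }                  // tidy the workspace for the next run
    }
}
```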
This is all displayed within Jenkins, each run is displayed in full and a status provided which is something the CheckMK Jenkins plugin will pick up on and report if the two are set up.
Like so much software there is a learning curve that comes with this, and I've blogged a few times with examples of using Jenkins to get things done.
There are various methods of controlling workflows each one builds on the next.
The other alternative I would look at is Rundeck
Which I’ve outlined in this post.
Put simply it’s this Blog server. Ghost is a CRM much like WordPress which provides a self-hosted option. I moved over to it a couple of years ago when a Redditor got really narky that I was using Medium and cited some compelling reasons why I shouldn’t.
Not being a fan of WordPress, I find it trying to be all things to all people and there’s too much going on I was pointed to Ghost CRM.
Running on Node.js and NPM the server is very lightweight, it runs off MySQL/MariaDB for Data storage. It’s themeable as well and there are plenty of free themes.
Because of its clean interface, it’s easy to use and embedded content is also pretty simple.
My Blog is a single person affair, Ghost also has the facility to provide groups and monetize content if you so wish.
If I have one issue with it as a platform it’s the lack of Mobile support, there was an Android Ghost app that stopped working when the API was updated from 2 to 3.
Like Graylog and CheckMK, Open-AudIT is all about information on your hosts. It is a Configuration Management Database (CMDB) that goes out and scans the networks you have set up for devices, using SSH, WinRM, SNMP or other credentials you set up to see and gain access to each device.
Once scanned you’ll have a definitive list of machines that are on that subnet, and if access was granted that list will display as much data as it could pick up to define that system from CPU, Ram through Motherboard statistics, OS and Software installed on the device.
Where this becomes really useful is when using tools like Ansible, where it's possible to pull information out of the CMDB to define which devices to run a script on.
The initial setup, while a bit time-consuming, is GUI-driven and really easy.
The free version of the software only provides visibility of 20 devices, so when you start scanning for all the servers and devices on the network you will possibly hit that cap pretty quickly.
It’s often the case that you’ll follow a howto and end up setting up multiple SQL databases all over the place, possibly on each server. While there is nothing wrong with this, it makes it far more manageable to have a central database and point the things that need it to that database.
MariaDB is maintained in the RHEL repositories and kept up to date and as such this is a central DB which my production services utilise.
If I had the space I’d have this set up in a failover cluster, but I don’t have the space, so it’s a single node.
As a good example of multiple services on this list linking up, I use Jenkins to backup the production databases nightly to the QNAP NAS which is mounted as an NFS share on the DB server. This is monitored in both Graylog and CheckMK which will send out notifications on failure and the surrounding logs to a channel on Discord.
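The rotation logic for that nightly job fits in a few lines of shell. Here the mysqldump call is stood in for by a tar of a scratch directory so the sketch is self-contained, and all the paths are made up:

```shell
# Sketch of a dated backup with a 7-run retention window. On the real
# server the tar line is a mysqldump to the NFS-mounted QNAP share.
backup_root=$(mktemp -d)   # stands in for the NFS mount
datadir=$(mktemp -d)       # stands in for the database being dumped
echo "some data" > "$datadir/file.txt"

stamp=$(date +%F)
tar -czf "$backup_root/backup-$stamp.tar.gz" -C "$datadir" .

# prune: keep only the 7 newest backups
ls -1t "$backup_root"/backup-*.tar.gz | tail -n +8 | xargs -r rm -f

ls "$backup_root"
```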
Ansible is my go-to automation scripting language. I've tried Chef, Puppet and SaltStack, and of the 4 this is the one my brain works best with. I'm not a developer, I'm a sysadmin who needs to use code to do his job; having learned bash, the transition over to Ansible (and no, I'm not just using shell with Ansible wrapped around it) has been far easier for me, as the logic works in my head and the documentation is pretty good, with examples as well.
The majority of the things I run from Jenkins using the built-in bash plugin are done from the single jumpbox, which has /etc/ansible/hosts set up listing all the servers Ansible can be run against, because the SSH is set up correctly.
As an aside, at the time of writing, I’m working on a Jenkins script which pulls the Ansible capable boxes out of the Open-Audit and populates the Ansible hosts file with the appropriate groups and details to provide a dynamic hosts file.
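The shape of that script can be sketched as a CSV-to-inventory transform; the field layout below is an assumption for illustration, not Open-AudIT's real export schema:

```shell
# Hypothetical sketch: turn a CSV export of hosts and groups into an
# INI-style Ansible hosts file.
export_csv=$(mktemp)
hosts_file=$(mktemp)

cat > "$export_csv" <<'EOF'
web01,webservers
web02,webservers
db01,databases
EOF

# collect hosts per group, then emit one [group] section each
awk -F, '{ hosts[$2] = hosts[$2] $1 "\n" }
     END { for (g in hosts) printf "[%s]\n%s\n", g, hosts[g] }' \
    "$export_csv" > "$hosts_file"

cat "$hosts_file"
```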
As an example of using Jenkins and Ansible together: when I build a new RedHat machine on Proxmox from a template, I have a Jenkins pipeline that will run on all the Ansible hosts, including the new one, and ensure they are set up the same way: updated, repos added, sudo set up, core software installed, NTP and DNS configured correctly, SELinux disabled, and the device added to the RHEL inventory page, amongst other things.
This saves me about 40 minutes of install time and ensures that the server is a consistent build each time.
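A few representative tasks from that kind of post-build playbook, sketched with illustrative names and values rather than my real ones:

```yaml
- hosts: new_servers
  become: true
  tasks:
    - name: Install the core packages
      ansible.builtin.dnf:
        name: [vim, chrony, git]
        state: present
    - name: Point chrony at the local NTP source
      ansible.builtin.lineinfile:
        path: /etc/chrony.conf
        regexp: '^pool '
        line: 'pool ntp.home.lan iburst'
    - name: Set SELinux to disabled
      ansible.posix.selinux:
        state: disabled
```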
I’ve run many a home mail server over the years from following the Howtoforge Postfix, Dovecot, SpamAssassin guides, to the Mail in the box and various derivatives of, I’ve tried using Zimbra and even at one point self-hosted Exchange 2010…
Axigen is the first of these that just works and I can leave alone and it does what a mail server does.
This is commercial software that has a free version that supplies all the features for up to 5 mail accounts.
Providing mail and calendar services (and allegedly collaboration, which I've never used), the Axigen model is aimed squarely at MSPs and big-corp types, so it is a fully featured mail server, fully WebGUI-administered, that also comes with a big slant towards security. There is some integration with Let's Encrypt for the secure IMAP and SMTP services, which might not work automatically depending on your setup but can be worked around manually.
I run 3 mail domains off this with 4GB RAM and 2 vCPUs and have never had an issue with it.
Then finally there is my VPN; as we have already covered, it's WireGuard run on OPNsense.
So why use WireGuard over something more traditional like OpenVPN? To quote TechCrunch:
WireGuard’s developer, security researcher Jason A. Donenfeld, began work on the protocol in 2016. Originally developed for Linux, it’s now also available on Windows, Mac, Android and iOS.
One major advantage of WireGuard is its simplicity. While OpenVPN and IKEv2 require hundreds of thousands of lines of code, WireGuard works with under 5,000, and that has all kinds of benefits.
Fewer bugs and security vulnerabilities, for instance. Reduced CPU usage. Faster connection times. And it’s much better suited for routers and mobile devices that don’t have desktop levels of computing power.
Cryptography is another highlight, with WireGuard using state-of-the-art protocols such as Curve25519, ChaCha20, Poly1305 and BLAKE2.
For me, the choice is based on a few reasons
1) The connectivity is Public/Private key based and there are no usernames or passwords involved. Users are not good at passwords at the best of times…
2) WireGuard is far lighter than OpenVPN; it requires less processing, and in the tests I've run, when there is a connection to the VPN the network speed is faster when running WireGuard.
3) A newer product with less code should mean fewer vulnerabilities and quick fixes when things are found.
4) Reconnects are quick, and this is important. I can connect and if needed reconnect to the VPN in seconds which means I can leave the client running, put the machine to sleep and upon wake as soon as there is a network connection the VPN connects.
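For reference, a WireGuard client config really is this small; everything below is a placeholder, not a real key, address or endpoint:

```
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

The PersistentKeepalive line is part of what makes those reconnects painless from behind NAT.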
If I had a single issue with the client, I’d love for there to be an option to automatically connect Wireguard when connecting to an unknown WiFi network.
This is where the "fun" begins. Previously I was running a Kubernetes cluster, which I documented in https://tech.davidfield.co.uk/tag/zero-to-code/. The whole setup is not exactly resource-rich, which caused me a problem because there is quite a bit of underlying resource needed to run a K8s cluster properly.
For this reason I spun up a Docker swarm across 2 RedHat nodes and an Ubuntu node. This, however, wasn't without its challenges.
Out of the box, RedHat 8 no longer supports Docker; RedHat will point you to Podman.
Now, I'm sure Podman is a worthy, more secure replacement, which no doubt I will migrate to. During this setup, however, I wanted a simple Docker swarm, so I needed to use the CentOS Docker repository.
I like having the swarm cluster if only for failover purposes of the public facing services.
With this setup I’m running the following services
While it's important to understand how Docker works from the command line, I don't believe this should be a barrier to entry for any software. Portainer for me is the perfect WebGUI for managing your Docker server or Portainer cluster.
The interface provides a clear picture of what is going on within docker, quick access to images, containers, services and then logs and command prompts on containers. The ability to start, stop and restart containers is there.
Portainer can be installed in many different ways, on this cluster I’ve got the Container/Agent setup
Ah, Rocket.Chat. Much like with Axigen, I've tried a few self-hosted chat apps, from IRC and Jitsi to Mattermost; for me Rocket.Chat is THE chat server. It's well supported with both feature and bug updates.
I'm using this mainly as a text chat tool with a group of friends, and its setup is ringfenced. However, I've implemented it at a couple of businesses and it works perfectly for them.
As well as chat, Jitsi-meet (either public or self-hosted) or BigBlueButton is supported for Video calls, and this works with an Outlook plugin as well.
I use Jenkins to back up the server daily by outputting the MongoDB files to a shared drive…
The only problem which irks me with it is its connection to Android. The Android app works perfectly; however, there is a notification limit set, and we can easily overrun that. I'm too tight to pay for the next level, because I consider it to be too expensive, but I have found a manual workaround.
KASM is a remote VDI solution based on Docker (rather than a virtual machine) and provides remote desktops out of the box for Ubuntu and CentOS, as well as a whole heap of useful apps.
Again there are both commercial and community versions of KASM, and I'm running the single-server community version on the Ubuntu-loaded MacBook Pro.
As well as having access to the service over the network, doing so via a web browser is handy when you’re running a Chromebook. Not needing to install a plugin or fat client is always appreciated.
There is a longer setup and admin article on KASM here which covers the software in greater detail.
The final 4 containers are for media management, google them if you want to know what they do.
There’s a lot going on here, none of it is perfect, because its all about learning. In an ideal lottery winning world I’d like to have 2 servers equally powered for full filover of all the services. The reality however is I don’t need that. An occasional bout of downtime every 4 months for a reboot, and clean up is workable downtime. I’ve put logging and monitoring in place and while sometimes the Gigabrix boxes sound like a Jumbo Jet taking off its in another room..
The Office Desk
An office desk should be set up to be usable. I don't follow the god-like cabling approach of keeping it all tidy; I've spent too much of my life unclipping cable ties and dealing with cables stuck under desks.
I’m not going to cover much more than a few highlights here
Lenovo Video Wall
I bought the Lenovo a while back because it's got a projector built in, and it did what it needed to do during a long travel session: Plex, WireGuard and a projector on a tablet in cheap hotel rooms with no TV.
The tablet then sat in a drawer; it's not upgradable and some newer apps won't run on it. One app that did was the Yi Home app.
I've got a bunch of Yi cameras all over the house for home security. They are reliable and cheap, have night vision and, for £10 a month, every time they detect movement they record to the cloud and store the clip for a month.
The software which runs on android has a Live video wall
So the tablet is now stuck to the wall using some sticky velcro strips and provides me with a view of the outward-facing feeds at home.
Nvidia Shield TV
The 42″ TV on the wall has the Nvidia Shield TV plugged into one of its 3 HDMI ports (the Chromebook is in one of the other ports, a stock chromecast the other)
The Nvidia Shield TV is a fascinating device. It was released in 2015, the year I bought mine, and it's still receiving Android updates and interface updates, and it still works great. I cannot think of another Android device which can say the same thing after 6 years.
It's still on sale as well, and frustratingly it's still one of the best Android TV experiences.
Samsung Ultra Wide
If ever there was a vanity purchase, this was it. Bought refurbished and faultless, while working from home during the pandemic and in the post-pandemic hybrid era, it turns out that this amount of screen real estate is amazing for working with a Windows-based OS.
Also watching videos on the ultrawide is pretty cool
The Living Room
Comparatively speaking, the living room is an oasis of simplicity; the core tech item is the Chromecast with Google TV.
I like this iteration on one of the best TV devices ever produced, the Chromecast. It adds an Android TV interface to navigate around TV viewing apps like Plex, BBC iPlayer or Netflix, on top of the most future-feeling thing ever: casting to a Chromecast.
Viewing on a Samsung 4K 60Hz TV, the small device copes with everything I've thrown at it.
A recent update also provided access to Google Stadia, which is now playable on the TV, and trust me, as a very casual gamer, this is a great addition.
Sound is supplied by a Sonos Beam soundbar
In our living room I don't think I've had this above 5; it gives great sound, and I'll be adding the behind-seat WiFi speakers to this setup soon.
The soundbar has access to Google Assistant and, via the Sonos Android app, many music services, which is great for asking it to "play pop music" or something similar.
Out on the Cloud
So what's cloud facing?
Via a set of firewalls and other jumps, the static IP I have is able to make self-hosted services like this blog, WireGuard and Rocket.Chat public facing.
Another of these services is the Red Hat platform, which as discussed earlier acts as a CMDB and security check for the RedHat servers.
I run Nextcloud offsite as well. The main reasons for this are that, first, it's cheap, and second, I keep pulling down boxes and bringing new ones up, and Nextcloud often got torn down. The data backup is important to me, so I put Nextcloud on a dedicated external server.
I've got a Discord server set up. I do chat to a few people on it, I use it for the TWiT channel mainly, and for about 6 months it's been where ALL my alerts go.
Usually you'd have alerts go to email; I'm trying to cut down what goes to my mailboxes, so I set about having all my alerts go into appropriate private channels on my Discord presence.
There’s a post here about how to do this if you’re interested
Then finally there are the smart devices.
I wrote about this earlier, boy are they chatty. Hue Lights, Chinese Lights, LED strips, Cameras, Thermostats, pressure sensors they all want to talk back to home often.
Because of this I bought the RouterBoard hAP/WiFi device and have ALL the smart devices segregated, with a small security setup on a very old Intel NUC running Security Onion and Wireshark with PCAP capture enabled.
Between Suricata and the other tools it provides, it's a fascinating experience to see what the IoT is doing on a home LAN.
So there it is, laid bare: the things I'm happy talking about on my home network. Some of this might be of interest; some will seem very strange to some users. I've built this up over a while, it's had several iterations, and it will probably have several more, as I'm already planning to move from Jenkins to Rundeck because of work.