Introduction

I've been looking for projects to do with OpenSUSE and clustering popped up, which then led to this post on how to set up a multi-node cluster on OpenSUSE using Hawk2, a project backed by SUSE that provides a nice web front end. In order to get this working I also needed to learn a bit about iSCSI setup in OpenSUSE, for which I used FreeNAS as the backend iSCSI server.

Disclaimer

*Remember when reading this that these are my personal opinions, which mainly come from 20 years of hands-on Linux experience rather than suggestion, hearsay or third-party input. I'm not asking you to agree with me in any shape or form; I'm writing this as a personal journey, not as an effort to start a flame war over which distro, desktop or package management tool is the best. This is just one guy's journey, not a personal affront against your beliefs.

Also I know I can't spell and my grammar is terrible; you don't need to tell me.*

Lab Notes

I've put these instructions together mainly to remind myself of the steps I took to get to a working cluster. I've supplied links to the places I copied notes from (history shows that links break and disappear).

FreeNAS iSCSI Setup


Hawk needs block storage, so an NFS mount won't do as the centralised location for data and the applications to run from. I could have done this step with OpenSUSE as well, since it's possible to set up iSCSI storage within YaST, but I wanted to see how to do it in FreeNAS 11.3. I created two iSCSI targets backed by two additional disks attached to the VirtualBox machine.

These instructions are from: http://blog.jetserver.net/freenas-creating-iscsi-storage

Step one –  Enable iSCSI Service

Navigate to Services > enable "start on boot" for the iSCSI service > click "start now" to start the service

Step two – Create Zvol in an existing Volume(Dataset)

Navigate to Storage > Pools > Add Zvol

Fill in the zvol name and comment boxes > set the zvol size in GiB > select a compression method > select the block size > click "Add zvol"

Step three – iSCSI configuration

1.Create iSCSI Portal

Navigate to Sharing > Block (iSCSI) > Portals > Add Portal (the portal may already exist if it has previously been configured)

2.Create Initiators for iSCSI

Configure an initiator to define which systems can connect to this iSCSI share.

Navigate through Sharing > Block (iSCSI) > Initiators > Add Initiators.

While creating an iSCSI initiator you can leave the keyword "ALL", which will allow any client to connect to this iSCSI share.

Note that it is recommended to set the initiator IQN. The source article linked above has some examples of how to get the IQN from the client machine (a XenServer example and a CentOS example).
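On the OpenSUSE machines used later in this post, the client's IQN can normally be read from the open-iscsi configuration once the iSCSI client software from the OpenSUSE section below is installed (this assumes the standard open-iscsi file layout):

cat /etc/iscsi/initiatorname.iscsi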

Authorized network: define the network address that will be able to access this share.

3.Create Target for iSCSI

A target is a combination of portals, valid initiators, and authentication methods.

Create a target by navigating to Sharing > Block (iSCSI) > Targets > Add Target.

Target Name: select a name for your target
Target Alias: an optional user-friendly name
Portal Group ID: select the portal group that you created
Initiator Group ID: select the initiator group that you created in the previous step

4. Create Extents

Extents define the storage resources that are shared with clients. There are two types of extents: device and file.

Device extents – virtual storage access to zvols or physical devices such as a disk.

File extents – virtual storage access to a file.

5. Associate an extent to a target

Sharing > Block (iSCSI) > Associated Targets > Add Target/Extent.

Select the items you have created in the previous steps.

That's it!

OpenSUSE iSCSI client setup

Now that the iSCSI targets are set up with two disks, the OpenSUSE servers need configuring to mount them. These steps need to be run on every OpenSUSE server that will be part of the cluster.

First install the iSCSI client software

zypper in yast2-iscsi-client

Confirm which software packages were installed

rpm -qa | grep scsi


This should show

yast2-iscsi-client-4.1.7-lp151.1.1.noarch
open-iscsi-2.0.876-lp151.13.6.1.x86_64
libopeniscsiusr0_2_0-2.0.876-lp151.13.6.1.x86_64
iscsiuio-0.7.8.2-lp151.13.6.1.x86_64

Confirm the OpenSUSE server is able to see the iSCSI targets; the IP after -p is the IP of your FreeNAS server

iscsiadm -m discovery -t st -p 10.10.10.15


The result of this command should show the iSCSI targets on the FreeNAS server

10.10.10.15:3260,-1 iqn.2005-10.org.freenas.ctl:iscsifiles
10.10.10.15:3260,-1 iqn.2005-10.org.freenas.ctl:iscsiportal
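The discovery also stores these targets locally as node records, which can be listed again at any time without re-running the discovery:

iscsiadm -m node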


As a check before logging in, list the current block devices so the new iSCSI disks can be identified afterwards

lsblk

will show

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 18G 0 disk
├─sda1 8:1 0 8M 0 part
└─sda2 8:2 0 18G 0 part /
sr0 11:0 1 3.8G 0 rom

This shows the current available block devices.

To log in to the iSCSI targets discovered above, the following command can be used; the -l flag performs the login and, since no specific target is given, it logs in to every target recorded by the discovery (a single target can be selected instead with -T <target iqn> -p <portal>)

iscsiadm -m node -l


The following output should be logged

Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:iscsifiles, portal: 10.10.10.15,3260]
Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:iscsiportal, portal: 10.10.10.15,3260]
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:iscsifiles, portal: 10.10.10.15,3260] successful.
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:iscsiportal, portal: 10.10.10.15,3260] successful.

If the lsblk command is re-run

lsblk

The attached iSCSI disks should now be displayed, in this example as sdb and sdc

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 18G 0 disk
├─sda1 8:1 0 8M 0 part
└─sda2 8:2 0 18G 0 part /
sdb 8:16 0 5G 0 disk
sdc 8:32 0 5G 0 disk
sr0 11:0 1 3.8G 0 rom
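If it isn't obvious which new device belongs to which target, the by-path symlinks that udev normally creates for iSCSI disks can help map them (this assumes the default udev rules are in place):

ls -l /dev/disk/by-path/ | grep iscsi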

Re-run this set of commands on all the OpenSUSE servers in the cluster
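One optional extra for a longer-lived setup: the logins above may not survive a reboot unless the node records are set to start automatically. Something along these lines should do it for the first target (repeat for the second):

iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:iscsifiles -p 10.10.10.15:3260 --op update -n node.startup -v automatic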

Install Hawk2 Server


Hawk2 is available in the OpenSUSE repositories and can be installed using the command

zypper in hawk2
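If the rest of the cluster stack (pacemaker, corosync and crmsh) isn't pulled in as a dependency it will need installing as well; a quick check in the same style as the iSCSI one earlier shows what is present:

rpm -qa | grep -E 'hawk|crmsh|pacemaker|corosync'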

Setup the first node in the cluster

The primary node in the cluster now needs to be initialised. This set of commands only needs to be run on the first node; the other nodes use a different command.

Initialise the cluster

crm cluster init

It's possible that on a stock server install of OpenSUSE Leap the following warning comes up

WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a watchdog.

To mitigate this in the test environment, the software watchdog module can be loaded

modprobe softdog
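In a lab that gets rebooted it's also worth making the module load at boot; systemd's modules-load.d mechanism handles this (the file name here is just my choice):

echo softdog > /etc/modules-load.d/softdog.conf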

Run the command again

crm cluster init


A set of questions will now be asked; for the most part the default answers should suffice. I deviated on the questions below.

Do you wish to use SBD (y/n)? y
Path to storage device (e.g. /dev/disk/by-id/...), or "none", use ";" as separator for multi path []/dev/disk/by-id/scsi-1FreeNAS_iSCSI_Disk_08002720c81900
Virtual ip 10.10.10.100

The firewall port for the Hawk2 web interface needs to be opened

firewall-cmd --add-port=7630/tcp --permanent
firewall-cmd --reload
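A quick check that the port is now in the runtime configuration:

firewall-cmd --list-ports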


The primary node is now set up.

Install Hawk2 Node


Each further node now needs to be joined to the cluster.

First load the software watchdog

modprobe softdog

Next, join the cluster

crm cluster join

You will be asked for the IP address of an existing node, from which configuration will be copied.  If you have not already configured passwordless ssh between nodes, you will be prompted for the root password of the existing node.


The following questions will be asked.

IP address or hostname of existing node (e.g.: 192.168.1.1) []10.10.10.11
Retrieving SSH keys - This may prompt for root@10.10.10.11:
Password:
One new SSH key installed
Configuring csync2...done
Merging known_hosts
Probing for new partitions...done
To see cluster status, open:
https://10.10.10.12:7630/
Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
Waiting for cluster.....done
Reloading cluster configuration...done
Done (log saved to /var/log/ha-cluster-bootstrap.log)
Hawk cluster interface is now running.

The firewall port needs to be opened on this node as well

firewall-cmd --add-port=7630/tcp --permanent
firewall-cmd --reload

Repeat this on all the cluster nodes
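Before opening the web interface, the cluster membership can also be confirmed from the command line on any node using crmsh:

crm status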

HAWK2 admin webpage


The cluster is now up and running and the web interface can be accessed at

https://10.10.10.12:7630/

Log in with username 'hacluster', password 'linux'
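As the warning during the join step pointed out, the default password should be changed; hacluster is a normal system user, so this can be done on each node with:

passwd hacluster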

What can you do with this?

There will be a follow-up post on setting up a service with Hawk2. The interface has a set of predefined wizards; HAProxy is the one I'll look to set up, as a failover three-server HAProxy cluster, possibly with an NGINX website as an example.

There is a good video here showing the features of the service

REFERENCE URLS

ClusterLabs/hawk – A web-based GUI for managing and monitoring the Pacemaker High-Availability cluster resource manager
Configuring and Managing Cluster Resources with Hawk2 | Administration Guide | SUSE Linux Enterprise High Availability Extension 15 SP1

https://hawk-ui.github.io/