
Creating a Red Hat Cluster: Part 2


Here is the second article on how to build a Red Hat/CentOS cluster. Part One of this series was posted a couple of months ago; I am sorry for the delay, but I have been a bit busy at work and at home. I can promise that Part 3 will not take a couple of months to post. I want to finish this series on the cluster and then post some new ideas that I have. In this article, we continue our journey of building our cluster, so stay tuned for Part 3, coming soon.

Installing the cluster software

If you are using Red Hat, you need to register your server in order to download new software or updates. You also need to subscribe to the “Clustering” and “Cluster Storage” channels in order to install these groups of software. With CentOS, this is not needed, since we can download these groups of software without registration. Let’s install the clustering software by typing this command:

# yum groupinstall Clustering

Since we will be using the GFS filesystem, we also need the “Cluster Storage” software group.

# yum groupinstall "Cluster Storage"

I also found out that the following package is needed by the cluster software, so let’s install it.

# yum install perl-Crypt-SSLeay

If you are using a 32-bit kernel and the server has more than 4 GB of memory, you need to install the PAE kernel and the matching GFS modules. This will ensure that you are using all the memory available on the server.

# yum install kmod-gnbd-PAE kmod-gfs-PAE kernel-PAE
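After rebooting on the PAE kernel, you can confirm that it is the one actually running; the exact version string will vary, but it should end in “PAE”.

# uname -r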

Finally, let’s make sure that the servers have the latest OS updates.

# yum -y update

Setting the locking type for GFS filesystem

To use GFS (Global File System) with the cluster, you need to activate GFS locking in the /etc/lvm/lvm.conf file. We need to change the “locking_type” variable from 0 to 3 to inform LVM that we will be dealing with GFS volume groups and GFS filesystems. This command needs to be run on all the servers.

# grep -i locking_type /etc/lvm/lvm.conf
locking_type = 0
# lvmconf --enable-cluster
# grep -i locking_type /etc/lvm/lvm.conf
locking_type = 3

Making sure SELinux and the firewall are disabled

We do not want to deal with SELinux and the firewall in our cluster, so we will disable them. From the GNOME desktop, run the following command:

# system-config-securitylevel
In the security level tool, set SELinux to “Disabled” and set the firewall to “Disabled”. Note that disabling SELinux requires a reboot to take effect.
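If a node has no desktop, the same result can be obtained from the command line. This is a minimal sketch using the standard RHEL/CentOS 5 commands:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# chkconfig iptables off
# service iptables stop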

Activating the cluster services

Let’s make sure that the cluster services are started each time the server boots.

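Assuming the standard RHEL/CentOS 5 cluster service names (cman, clvmd, gfs and rgmanager), they can be enabled at boot with chkconfig:

# chkconfig cman on
# chkconfig clvmd on
# chkconfig gfs on
# chkconfig rgmanager on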

We can now start the cluster services manually or reboot the server.

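Again assuming the standard service names, the services can be started manually in this order (cman first):

# service cman start
# service clvmd start
# service gfs start
# service rgmanager start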

Starting cluster configuration

There are two tools you can use to configure and maintain your cluster. The first one is a web interface called “Conga” that requires the installation of an agent on each node (ricci) and a centralized configuration server (luci). If you want to use the web interface (it seems to be working a lot better now), it is advisable to install the configuration server on a Linux server outside of the cluster. Conga was fairly new when I created my first cluster, so we decided to use the second tool, named “Cluster Configuration”. This tool can be started from any node within the cluster and does not require installing any additional software. To start the cluster configuration GUI, type the following command using the “root” account:

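On RHEL/CentOS 5, the Cluster Configuration Tool is provided by the system-config-cluster package, so the command is:

# system-config-cluster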


The first time you run the cluster configuration GUI, a warning message may be displayed. It is just informing us that the cluster configuration file “/etc/cluster/cluster.conf” was not found. Click on the “Create New Configuration” button and let’s move on.

Cluster general settings

 

  • Next we need to enter the name of our cluster; we have chosen to name it “our_cluster”. The name of a cluster cannot be changed. The only way to change the name is to create a new cluster, so choose it wisely.
  • We use the recommended lock manager, DLM (Distributed Lock Manager); GULM is deprecated.
  • The multicast address we will use for the heartbeat is “239.1.1.1” (see the sketch of the resulting configuration after this list).
  • The usage of the “Quorum Disk” is outside the scope of this article. But basically, you define a small disk on the SAN that is shared among the nodes in the cluster, and node status is regularly written to that disk. If a node hasn’t updated its status for a period of time, it is considered down and the cluster will then fence that node. If you are interested in using the “Quorum Disk”, there is an article here that describes how to set it up.
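For reference, these general settings end up in “/etc/cluster/cluster.conf”. A minimal sketch of what the file looks like at this point (attribute details vary with the cluster version):

<?xml version="1.0"?>
<cluster name="our_cluster" config_version="1">
  <cman>
    <multicast addr="239.1.1.1"/>
  </cman>
  <clusternodes/>
  <fencedevices/>
</cluster>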


Cluster Properties

 

Now let’s check some of the default settings given to our cluster. Click on “Cluster” on the left-hand side of the screen and then click on the “Edit Cluster Properties” button.

The Configuration Version value defaults to 1 and is automatically incremented each time you modify your cluster configuration.

The Post-Join Delay parameter is the number of seconds the fence daemon (fenced) waits before fencing a node after the node joins the fence domain. The Post-Join Delay default value is 3. A typical setting for Post-Join Delay is between 20 and 30 seconds, but it can vary according to cluster and network performance.

The Post-Fail Delay parameter is the number of seconds the fence daemon (fenced) waits before fencing a node (a member of the fence domain) after the node has failed. The Post-Fail Delay default value is 0. Its value may be varied to suit cluster and network performance. For our cluster, change the “Post-Fail Delay” to 30 seconds.
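Both values are stored on the fence daemon line of cluster.conf; with the change above, the line should look something like this (a sketch, attribute order may differ):

<fence_daemon post_join_delay="3" post_fail_delay="30"/>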

 

 

Adding our nodes to the cluster

Insert our first node "bilbo"

 

To add a node to the cluster, select “Cluster Nodes” on the upper left side of the screen and click on the “Add a Cluster Node” button. You will then be presented with the “Node Properties” screen. Enter the node name, the “Quorum Votes” and the name of the interface used for the “Multicast” (heartbeat), in our case eth0. To avoid problems, I always use the first interface for the cluster heartbeat network.

  • Enter the host name used for the heartbeat; for our first node it will be “hbbilbo.maison.ca”. Remember that this name MUST be defined in our hosts file and in your DNS (if you have one).
  • Quorum is a voting algorithm used by the cluster manager. We say a cluster has “quorum” if a majority of nodes are alive, communicating, and agree on the active cluster members. So in a thirteen-node cluster, quorum is only reached if seven or more nodes are communicating. If the cluster drops to six nodes, it loses quorum and can no longer function. For our cluster we will leave the Quorum Votes at the default value of 1.

If we had a two-node cluster, we would need to make a special exception to the quorum rules. There is a special setting, “two_node”, in the /etc/cluster/cluster.conf file that looks like this:  <cman expected_votes="1" two_node="1"/>

 

Repeat the operation for every node you want to include in the cluster.

Insert "gandalf" node in our cluster

Insert "gollum" node in our cluster
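Once all three nodes are added, the node section of /etc/cluster/cluster.conf should look roughly like this (a sketch; the “hb” names for gandalf and gollum are assumed to follow the same convention used for bilbo):

<clusternodes>
  <clusternode name="hbbilbo.maison.ca" nodeid="1" votes="1"/>
  <clusternode name="hbgandalf.maison.ca" nodeid="2" votes="1"/>
  <clusternode name="hbgollum.maison.ca" nodeid="3" votes="1"/>
</clusternodes>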

This article was getting quite long, so I decided to stop here and continue in another article.

We are almost there; stay tuned, the third part will be posted soon.

 

Part 1 – Creating a Linux Red Hat/CentOS cluster

Part 2 – Creating a Linux Red Hat/CentOS cluster

Part 3 – Creating a Linux Red Hat/CentOS cluster

Part 4 – Creating a Linux Red Hat/CentOS cluster

Part 5 – Creating a Linux Red Hat/CentOS cluster

 
