Creating a Red Hat Cluster: Part 2

January 3rd, 2011

 

Here is the second article on how to build a Red Hat/CentOS cluster. Part One of this series was posted a couple of months ago; I’m sorry for the delay, but I have been a bit busy at work and at home. I can promise that Part 3 will not take a couple of months to post. I want to finish this series on the cluster and then post some new ideas that I have. In this article, we continue our journey of building the cluster, so stay tuned for Part 3, coming soon.

Installing the cluster software

If you are using Red Hat, you need to register your server in order to download new software or updates. You also need to subscribe to the “Clustering” and “Cluster Storage” channels in order to install these groups of software. With CentOS, this is not needed, since we can download these groups of software without registration. Let’s install the clustering software by typing this command:

# yum groupinstall Clustering

Since we will be using the GFS filesystem, we will need the “Cluster Storage” software group.

# yum groupinstall "Cluster Storage"

I also found out that the following package is needed by the cluster software, so let’s install it.

# yum install perl-Crypt-SSLeay

If you are using a 32-bit kernel and the server has more than 4 GB of memory, you need to install the PAE kernel and the matching GFS modules. This ensures that all the memory available on the server is used.

# yum install kmod-gnbd-PAE kmod-gfs-PAE kernel-PAE
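After rebooting into the new kernel, you can do a quick sanity check to confirm that the PAE kernel is the one running and that the kernel now sees all the installed memory (the exact version string will vary with your update level):

```shell
# The running kernel version should end in "PAE", e.g. 2.6.18-194.el5PAE
uname -r

# The total memory reported here should match what is physically installed
grep MemTotal /proc/meminfo
```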

Finally, let’s make sure that the servers have the latest OS updates.

# yum -y update

Setting the locking type for GFS filesystem

To use GFS (Global File System) with the cluster, you need to activate GFS locking in the /etc/lvm/lvm.conf file. We need to change the “locking_type” variable from 0 to 3 to inform LVM that we will be dealing with GFS volume groups and GFS filesystems. This command needs to be run on all the servers.

# grep -i locking_type /etc/lvm/lvm.conf
locking_type = 0
# lvmconf --enable-cluster
# grep -i locking_type /etc/lvm/lvm.conf
locking_type = 3
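Since this change has to be made on every node, a small loop can save a few keystrokes. This is just a sketch, assuming the node names bilbo, gollum and gandalf introduced in Part 1 and passwordless SSH access for root between the nodes:

```shell
# Enable cluster-wide LVM locking on every node, then print the line to verify it
for node in bilbo gollum gandalf; do
    ssh root@$node "lvmconf --enable-cluster && grep locking_type /etc/lvm/lvm.conf"
done
```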


Categories: Cluster

Creating a Red Hat Cluster: Part 1

November 8th, 2010

This is the first of a series of articles that will demonstrate how to create a Linux cluster using the Red Hat/CentOS 5.5 distribution. When we created our first cluster at the office, we searched the internet for Red Hat cluster setup information. To my surprise, we could not find many articles or user experiences on the topic. So I hope this series of articles will benefit the community of users trying, or wanting, to create their own cluster.

The cluster hardware

Our cluster will have 3 HP servers; each will have 4GB of memory, 36GB of mirrored internal disks, one QLogic fibre card connected to our SAN, 2 network cards and, as our fencing device, the on-board HP iLO (Integrated Lights-Out). This is my setup; yours does not need to be the same. You do not need HP servers to build a cluster, you do not need mirrored disks (although they are recommended) and you do not need a SAN infrastructure either (an NFS share can also be used). One thing I would recommend is a fencing device; on HP servers there is a network port on the back of each server called “iLO”. It allows the cluster software to power on, power off or restart a server remotely. The Red Hat cluster package lets you use many similar fencing devices. This part of the cluster is important, because it prevents nodes from writing to a non-shareable filesystem at the wrong moment and corrupting data. If you do not have a fencing device, you can always use the manual fencing method; it works, but it is not supported.
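You can test a fencing device from the command line before the cluster is even configured. Here is a minimal sketch using the fence_ilo agent shipped with the Red Hat cluster software; the address, login and password shown are placeholders that you must replace with your own iLO settings:

```shell
# Ask the iLO board of a node for its power status
# (192.168.1.50, Administrator and "secret" are placeholder values)
fence_ilo -a 192.168.1.50 -l Administrator -p secret -o status
```

If this reports the power status correctly, the same credentials can later be used in the cluster fencing configuration.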

The Setup

Our cluster will contain 3 servers: 2 active servers and one passive. For our example, one server will run an HTTP service and the second server will be an FTP server. The third server will be used as a fail-over server; if the first or second server has a network, SAN or hardware problem, the service it is running will move to the third server.

Although not required, having a passive server offers some advantages. First, it guarantees that if one server has a problem, the passive server will be able to handle the load of either of your servers. If we did not have that passive server, we would need to make sure that each of the servers was capable of handling the load of both services at once.

A clustered environment offers other advantages when the time comes to do a hardware upgrade on a server. Let’s say we need to add memory to the first server: we could move the HTTP service to the passive node, add memory to the first server and then move the service back to the original node when ready. Another advantage of having a passive server is that you can update the OS of your nodes one by one without affecting the services (if a reboot is necessary).
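With the Red Hat cluster tools, relocating a service like this is a one-line operation using clusvcadm. A sketch, assuming the node names used in this series; the service name httpd-service is an illustration and should match whatever name you give the service when it is configured:

```shell
# Move the HTTP service to the passive node before the upgrade
clusvcadm -r httpd-service -m gandalf

# ...add the memory and reboot the first server...

# Move the service back to its original node
clusvcadm -r httpd-service -m bilbo
```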

So our nodes will be named “bilbo”, which will host the HTTP service, “gollum”, which will host the FTP service, and “gandalf”, which will be our passive node.
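It is a good idea to have every node resolve the other nodes’ names locally, so the cluster does not depend on DNS. A sample /etc/hosts fragment; the IP addresses here are made up for illustration and must match your own network:

```
192.168.1.10   bilbo
192.168.1.11   gollum
192.168.1.12   gandalf
```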


As the diagram above shows, each server uses 3 network cards.


Categories: Cluster