Creating a Red Hat Cluster: Part 5

April 16th, 2011

Welcome back to LINternUX, for the last article of this series on how to build a working Red Hat cluster. So far we have a working cluster, but it only moves the IP from server to server. In this article, we will put in place everything needed so that we have an FTP and a web service that are fully redundant within our cluster. In the previous article, we created a GFS filesystem under the mount point “/cadmin”; this is where we will put the scripts, configuration files and logs used by our cluster. The content of the “/cadmin” filesystem can be downloaded here; it includes all the directory structure and scripts used in our cluster articles. After this article, you will have a fully configured cluster running an ftp and a web service. We have a lot to do, so let’s begin.

 

FTP prerequisite

We need to make sure that the FTP server “vsftpd” is installed on every server in our cluster. You can check whether it is installed by typing the following command:

root@gandalf:~# rpm -q vsftpd
vsftpd-2.0.5-16.el5_5.1
root@gandalf:~#
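
If you want to check all three nodes from a single place, a quick loop over ssh does the job. This is just a convenience sketch: it assumes the host names defined in our /etc/hosts file and that ssh access between the nodes is already set up.

root@gandalf:~# for h in gollum gandalf bilbo; do echo -n "$h: "; ssh $h "rpm -q vsftpd"; done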

If it is not installed, run the following command on every server where it is missing:

root@bilbo:~# yum install vsftpd

We must also make sure that vsftpd is not running and does not start upon reboot, since the cluster software will be responsible for starting it. To do so, use the following commands on all servers:

root@bilbo:~# service vsftpd stop
Shutting down vsftpd:                                      [FAILED]
root@bilbo:~# chkconfig vsftpd off
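
The [FAILED] message above simply means that vsftpd was not running in the first place, which is fine. To double-check that it will not come back at boot, you can list its runlevel configuration; the output below is what you would typically see once the service is disabled.

root@bilbo:~# chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:off   4:off   5:off   6:off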

Read more…

Categories: Cluster

Creating a Red Hat Cluster: Part 4

April 3rd, 2011

Welcome back to LINternUX, where we continue the creation of our cluster. By now you should have a working cluster with an ftp service and a web service defined. Although the services are created, our ftp and web services are not really running yet. In this article, we will create a GFS filesystem that will allow us to share data between nodes. In the next and last article we’ll finalise the cluster by completing our ftp and web services so they really work. We will also show you how to manually move a service from one server to another (see the preview below). We still have some work to do, so let’s start right away.
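
As a quick preview of the manual relocation we will cover later, moving a service between nodes is done with the clusvcadm command. The line below is only a sketch: “srv-ftp” is a placeholder for whatever name you gave your ftp service, and “bilbo” is the member you want it moved to (“-r” relocates the service, “-m” selects the target member).

root@gandalf:~# clusvcadm -r srv-ftp -m bilbo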

 

Adding a SAN disk to our servers

The Linux operating system is installed on the internal disks of each of our servers. We will now add a SAN disk that will be visible to each of our servers. I assume here that your SAN and your Brocade switch are configured accordingly; explaining how to set up the SAN and the Brocade switch is beyond the scope of this article. The important point is that the new disk must be visible to every node in our cluster. In the example below we already have a SAN disk (sda) with one partition (sda1) on it. Adding a disk to the server can be done live, without any interruption of service, if you follow the steps below. I would suggest you practice on a test server to become familiar with the procedure.

Before we add the disk, let’s see which disks are currently visible on the system by looking at the /proc/partitions file. We can see that we already have a disk (sda) with one partition on it, so the new disk that we’re going to add should be seen as “sdb”.

root@gollum~# grep sd /proc/partitions
8     0  104857920 sda
8     1  104856223 sda1

Let’s rescan the SCSI bus by typing the command below. This command must be run on each of the servers within the cluster. Here, we have only one HBA (Host Bus Adapter) card connected to the SAN on each server. If you have a second HBA, you need to run the same command for it, replacing “host0” with “host1”.

root@gollum~#  echo "- - -" > /sys/class/scsi_host/host0/scan
root@gandalf~# echo "- - -" > /sys/class/scsi_host/host0/scan
root@bilbo~#   echo "- - -" > /sys/class/scsi_host/host0/scan
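
If you have more than one HBA, you can rescan all of them in one shot with a small loop instead of repeating the command for host0, host1, and so on. This is just a sketch that walks every scsi_host entry present on the server:

root@gollum~# for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done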

Let’s see whether the new disk (sdb) was detected (check on each server):

root@gollum~# grep sd /proc/partitions
8     0  104857920 sda
8     1  104856223 sda1
8    16   15728640 sdb
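
The sizes reported in /proc/partitions are in 1 KB blocks, so the 15728640 blocks above correspond to a LUN of roughly 15 GB. If you want a second opinion on the new disk, fdisk can report its size as well; at this point it has no partition table yet, which is expected.

root@gollum~# fdisk -l /dev/sdb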

Read more…

Categories: Cluster

Creating a Red Hat Cluster: Part 3

March 19th, 2011

Here is the third of a series of articles describing how to create a Linux Red Hat/CentOS cluster. At the end of this article, we will have a working cluster; all that will be left to do is the creation of the GFS filesystem and of the scripts that will stop, start and give the status of our ftp and web services. You can refer to “Part 1”, “Part 2” and our network diagram before reading this article, but for now let’s move on and continue our journey into building our cluster.

Defining the fencing devices

The fencing device is used by the cluster software to power off a node when that node is considered to be in trouble. We need to define a fencing device for each of the nodes that we defined in the previous article.

In our cluster, the fencing device used is the HP iLO interface. Select “HP ILO Device” from the pull-down menu and enter the device information needed to connect to it. If you would like to see which fencing devices are supported by the Red Hat cluster, you can consult the fencing FAQ on this page.

You could use manual fencing for testing purposes, but it is not supported in a production environment, since manual intervention is required to fence a server.

For each fencing device, we need to provide the following information (the resulting cluster.conf entries are sketched right after this list):

  • The fencing device name: we chose to prefix it with a lowercase “f_”, followed by the name assigned to its IP address in /etc/hosts.
  • The login name used to access the device.
  • The password used to authenticate to the device.
  • The host name, as defined in our hosts file, used to connect to the device.
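
For reference, here is a minimal sketch of how these fields typically end up in the /etc/cluster/cluster.conf file when the HP iLO fence agent is used. The host names, logins and passwords below are purely illustrative, and the exact attribute names (for example hostname versus ipaddr) vary between Red Hat Cluster releases, so treat this as an example rather than something to copy as-is.

<fencedevices>
    <fencedevice agent="fence_ilo" name="f_gandalf" hostname="gandalf-ilo" login="Administrator" passwd="XXXXXXXX"/>
    <fencedevice agent="fence_ilo" name="f_bilbo"   hostname="bilbo-ilo"   login="Administrator" passwd="XXXXXXXX"/>
    <fencedevice agent="fence_ilo" name="f_gollum"  hostname="gollum-ilo"  login="Administrator" passwd="XXXXXXXX"/>
</fencedevices>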

 

Repeat the operation for each server in the cluster.
Read more…

Categories: Cluster