Multiple Web Nodes using GlusterFS with CentOS 6.6

Check to see if there are any trusted storage pools already configured.

# gluster peer status
Number of Peers: 0

Check that you can establish communication from web01 to web02.

[root@glusterfs-web01 ~]# gluster peer probe web02.dummydomains.org.uk
peer probe: failed: Probe returned with unknown errno 107

This will fail if the peer isn't listening, or if the firewall is blocking communication on the following ports:

111 tcp
24007 tcp 
24008 tcp
24009 tcp

Here I am using Rackspace Cloud Networks to create a virtual private network of 192.168.10.0/24 and attach it to each of the web nodes. To make sure web01.dummydomains.org.uk connects to web02.dummydomains.org.uk over the private IP, I need to edit my hosts file. Entries there take precedence over DNS, which would return the public IP address.

My hosts file on web01 now looks like this:


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2a00:1a48:7808:101:be76:4eff:fe08:9cec glusterfs-web01
162.13.183.215 glusterfs-web01
10.181.138.233 glusterfs-web01
192.168.10.1 glusterfs-web01
192.168.10.2 glusterfs-web02 web02.dummydomains.org.uk web02

While on web02, it looks like this:


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2a00:1a48:7808:101:be76:4eff:fe09:1b3b glusterfs-web02
10.181.140.163 glusterfs-web02
162.13.184.243 glusterfs-web02
192.168.10.2 glusterfs-web02
192.168.10.1 glusterfs-web01 web01.dummydomains.org.uk web01

Check that the hostname resolves to the correct (private) IP address. You can use ping for that.

[root@glusterfs-web01 ~]# ping -c2 web02 
PING glusterfs-web02 (192.168.10.2) 56(84) bytes of data.
64 bytes from glusterfs-web02 (192.168.10.2): icmp_seq=1 ttl=64 time=0.894 ms
64 bytes from glusterfs-web02 (192.168.10.2): icmp_seq=2 ttl=64 time=0.393 ms

--- glusterfs-web02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.393/0.643/0.894/0.251 ms

You need to check this on both (or all) web nodes.

[root@glusterfs-web02 ~]# ping -c2 web01
PING glusterfs-web01 (192.168.10.1) 56(84) bytes of data.
64 bytes from glusterfs-web01 (192.168.10.1): icmp_seq=1 ttl=64 time=0.933 ms
64 bytes from glusterfs-web01 (192.168.10.1): icmp_seq=2 ttl=64 time=0.383 ms

--- glusterfs-web01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.383/0.658/0.933/0.275 ms

Here we have to allow incoming connections by altering the iptables configuration on all web nodes. My network setup on web02 is shown below. The private network is on the eth2 interface.

[root@glusterfs-web02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:09:1b:3b brd ff:ff:ff:ff:ff:ff
    inet 162.13.184.243/24 brd 162.13.184.255 scope global eth0
    inet6 2a00:1a48:7808:101:be76:4eff:fe09:1b3b/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::be76:4eff:fe09:1b3b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:09:1b:3c brd ff:ff:ff:ff:ff:ff
    inet 10.181.140.163/19 brd 10.181.159.255 scope global eth1
    inet6 fe80::be76:4eff:fe09:1b3c/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:08:ea:0a brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.2/24 brd 192.168.10.255 scope global eth2
    inet6 fe80::be76:4eff:fe08:ea0a/64 scope link 
       valid_lft forever preferred_lft forever

And web02's current iptables configuration looks like this.

[root@glusterfs-web02 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1       12   824 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           ctstate RELATED,ESTABLISHED 
2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
3        0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
4        0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           ctstate NEW tcp dpt:22 
5        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT 7 packets, 788 bytes)
num   pkts bytes target     prot opt in     out     source               destination

On web02 I insert an ACCEPT rule at position 4, before the REJECT rule. This rule allows all incoming connections on the eth2 interface.

# iptables -I INPUT 4 -i eth2 -p all -j ACCEPT
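
If you would rather not accept everything arriving on eth2, a tighter variant scopes the rule to the private subnet as well. This just prints the command so you can review it first; the interface and subnet are the ones from this setup:

```shell
# Print (not run) a rule accepting traffic on eth2 only when it comes
# from the 192.168.10.0/24 cloud network; pipe to sh as root to apply.
printf 'iptables -I INPUT 4 -i eth2 -s 192.168.10.0/24 -j ACCEPT\n'
```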

Now on web01, probing web02 returns a better result.

[root@glusterfs-web01 ~]# gluster peer probe web02.dummydomains.org.uk
peer probe: success.

Back on web02, make sure you save the firewall configuration.

# service iptables save
# service iptables reload

Now make the same firewall changes on web01, so that probing from web02 succeeds too.

[root@glusterfs-web02 ~]# gluster peer probe web01.dummydomains.org.uk
peer probe: success.

You might also want to run peer status to check what things look like.

[root@glusterfs-web01 ~]# gluster peer status
Number of Peers: 1

Hostname: web02.dummydomains.org.uk
Uuid: 29ca7ff7-f19b-4844-89de-6356ca4b51ff
State: Peer in Cluster (Connected)

Don’t forget that gluster is also an interactive shell, demonstrated from web02 below.

[root@glusterfs-web02 ~]# gluster
gluster> peer status
Number of Peers: 1

Hostname: 192.168.10.1
Uuid: 38fb3a93-133f-4588-95a0-5ec8cd5265e3
State: Peer in Cluster (Connected)
Other names:
web01.dummydomains.org.uk
gluster> exit

…or, better yet!

[root@glusterfs-web01 ~]# gluster pool list
UUID					Hostname                 	State
29ca7ff7-f19b-4844-89de-6356ca4b51ff	web02.dummydomains.org.uk	Connected 
38fb3a93-133f-4588-95a0-5ec8cd5265e3	localhost                	Connected

And again from web02.

[root@glusterfs-web02 ~]# gluster pool list
UUID					Hostname    	State
38fb3a93-133f-4588-95a0-5ec8cd5265e3	192.168.10.1	Connected 
29ca7ff7-f19b-4844-89de-6356ca4b51ff	localhost   	Connected

Create the Gluster volume with the following command. I had to use the force option because it complained about creating bricks on the root partition.

[root@glusterfs-web01 ~]# gluster volume create dummydomainsVol replica 2 transport tcp web01.dummydomains.org.uk:/data web02.dummydomains.org.uk:/data force
volume create: dummydomainsVol: success: please start the volume to access data

Then start the volume:

[root@glusterfs-web01 ~]# gluster volume start dummydomainsVol
volume start: dummydomainsVol: success

Check the status from any node and you should see something similar to this.

[root@glusterfs-web02 ~]# gluster volume info
 
Volume Name: dummydomainsVol
Type: Replicate
Volume ID: 4694564e-134d-4f85-9716-568a0a6f4156
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: web01.dummydomains.org.uk:/data
Brick2: web02.dummydomains.org.uk:/data
Options Reconfigured:
performance.readdir-ahead: on

Here we mount the gluster volume (on any node) to the /var/www/vhosts directory.

[root@glusterfs-web01 ~]# mount.glusterfs web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts

Check the mount output.

[root@glusterfs-web01 ~]# mount | grep web02
web02.dummydomains.org.uk:/dummydomainsVol on /var/www/vhosts type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Hopefully that worked. You now want to unmount the volume and mount it via /etc/fstab so it's persistent across a reboot.

# umount -v /var/www/vhosts
# vi /etc/fstab
# mount -a
# mount | grep web02       
web02.dummydomains.org.uk:/dummydomainsVol on /var/www/vhosts type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

The contents of my fstab look like this.


[root@glusterfs-web01 ~]# cat /etc/fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/xvda1 / ext3 defaults,noatime,barrier=0 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
#/dev/xvdc1 none swap sw 0 0
web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts glusterfs defaults,_netdev 0 0

Once you’ve also mounted the volume from web02, you’re good to test it!

I ran the following command on web01…

mkdir -v /var/www/vhosts/dummydomains.org.uk; for i in $(seq 1 20); do touch /var/www/vhosts/dummydomains.org.uk/web-file-$i.txt; done

…and then listed the following directory on web02.

[root@glusterfs-web02 ~]# ls -l /var/www/vhosts/dummydomains.org.uk/
total 0
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-10.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-11.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-12.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-13.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-14.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-15.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-16.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-17.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-18.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-19.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-1.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-20.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-2.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-3.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-4.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-5.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-6.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-7.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-8.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-9.txt

Nice!
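
The test loop above can also be wrapped in a small helper that creates the files and reports the count it sees. The function name is my own; on a real node the tutorial's /var/www/vhosts/dummydomains.org.uk path would be the first argument:

```shell
# Create N empty test files under a directory and print how many it holds.
make_test_files() {
    dir="$1"; count="$2"
    mkdir -p "$dir"
    for i in $(seq 1 "$count"); do
        touch "$dir/web-file-$i.txt"
    done
    ls "$dir" | wc -l
}
# Demonstrated against a throwaway directory here; use the gluster mount
# point on web01 and then list the directory from web02 to confirm replication.
make_test_files "$(mktemp -d)" 20
```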

Installing GlusterFS on CentOS 6.6

You will need to download the following repository file to the /etc/yum.repos.d/ directory before trying to install the glusterfs-server package.

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
yum install glusterfs-server

It will pull in a load of dependencies…

===========================================================================================================================================
 Package                                      Arch                  Version                            Repository                     Size
===========================================================================================================================================
Installing:
 glusterfs-server                             x86_64                3.7.2-3.el6                        glusterfs-epel                1.2 M
Installing for dependencies:
 device-mapper-event                          x86_64                1.02.90-2.el6_6.3                  updates                       122 k
 device-mapper-event-libs                     x86_64                1.02.90-2.el6_6.3                  updates                       116 k
 device-mapper-persistent-data                x86_64                0.3.2-1.el6                        base                          2.5 M
 glusterfs-cli                                x86_64                3.7.2-3.el6                        glusterfs-epel                155 k
 glusterfs-client-xlators                     x86_64                3.7.2-3.el6                        glusterfs-epel                919 k
 glusterfs-fuse                               x86_64                3.7.2-3.el6                        glusterfs-epel                119 k
 keyutils                                     x86_64                1.4-5.el6                          base                           39 k
 libevent                                     x86_64                1.4.13-4.el6                       base                           66 k
 libgssglue                                   x86_64                0.1-11.el6                         base                           23 k
 libtirpc                                     x86_64                0.2.1-10.el6                       base                           79 k
 lvm2                                         x86_64                2.02.111-2.el6_6.3                 updates                       817 k
 lvm2-libs                                    x86_64                2.02.111-2.el6_6.3                 updates                       901 k
 nfs-utils                                    x86_64                1:1.2.3-54.el6                     base                          326 k
 nfs-utils-lib                                x86_64                1.1.5-9.el6_6                      updates                        68 k
 pyxattr                                      x86_64                0.5.0-1.el6                        epel                           24 k
 rpcbind                                      x86_64                0.2.0-11.el6                       base                           51 k
 userspace-rcu                                x86_64                0.7.7-1.el6                        epel                           60 k
Updating for dependencies:
 glusterfs                                    x86_64                3.7.2-3.el6                        glusterfs-epel                416 k
 glusterfs-api                                x86_64                3.7.2-3.el6                        glusterfs-epel                 72 k
 glusterfs-libs                               x86_64                3.7.2-3.el6                        glusterfs-epel                318 k

Transaction Summary
===========================================================================================================================================
Install      18 Package(s)
Upgrade       3 Package(s)

Total download size: 8.3 M
Is this ok [y/N]: y

…Accept the GPG key imports when prompted and proceed with the installation.

warning: rpmts_HdrFromFdno: Header V4 RSA/SHA1 Signature, key ID 4ab22bb3: NOKEY
Retrieving key from http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
Importing GPG key 0x4AB22BB3:
 Userid: "Gluster Packager <glusterpackager@download.gluster.org>"
 From  : http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
Is this ok [y/N]: y
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Importing GPG key 0x0608B895:
 Userid : EPEL (6) <epel@fedoraproject.org>
 Package: epel-release-6-8.noarch (@epel/6.6)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Is this ok [y/N]: y

Ensure the glusterd daemon is set to start on boot, then start the service.

chkconfig --levels 235 glusterd on
service glusterd start

You can check the status with:

[root@glusterfs-web01 ~]# /etc/init.d/glusterfsd status
glusterfsd is stopped
[root@glusterfs-web01 ~]# /etc/init.d/glusterd status  
glusterd (pid 6148) is running...

You can check the version with the following:

[root@glusterfs-web01 ~]# glusterfsd --version
glusterfs 3.7.2 built on Jun 23 2015 12:13:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.