Check to see if there are any trusted storage pools already configured.
# gluster peer status
Number of Peers: 0
Check you can establish communication from web01 to web02.
[root@glusterfs-web01 ~]# gluster peer probe web02.dummydomains.org.uk
peer probe: failed: Probe returned with unknown errno 107
This will fail if the peer isn't listening, or if the firewall is blocking communication on the following ports:

111/tcp
24007/tcp
24008/tcp
24009/tcp
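Before touching the firewall, it's worth confirming that glusterd is actually listening on the far side. A quick check, assuming the CentOS 6 tooling used throughout this post (on newer systems you'd use ss instead of netstat):

# netstat -tlnp | egrep ':(111|24007|24008|24009) '

If glusterd isn't in that list, fix the service before blaming the firewall.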
Here I am using Rackspace Cloud Networks to create a virtual private network (192.168.10.0/24) and attach it to each of the web nodes. To make sure I connect over the private IP when connecting to web02.dummydomains.org.uk from web01.dummydomains.org.uk, I need to edit my hosts configuration file. Entries there take precedence over DNS, which would otherwise return my public IP address. My hosts file on web01 now looks like this:
192.168.10.2 glusterfs-web02 web02.dummydomains.org.uk web02
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2a00:1a48:7808:101:be76:4eff:fe08:9cec glusterfs-web01
162.13.183.215 glusterfs-web01
10.181.138.233 glusterfs-web01
192.168.10.1 glusterfs-web01
While on web02, it looks like this:
192.168.10.1 glusterfs-web01 web01.dummydomains.org.uk web01
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2a00:1a48:7808:101:be76:4eff:fe09:1b3b glusterfs-web02
10.181.140.163 glusterfs-web02
162.13.184.243 glusterfs-web02
192.168.10.2 glusterfs-web02
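If you want to confirm which address the resolver will actually hand back, getent does it without sending any packets (assuming the default hosts-before-DNS ordering in /etc/nsswitch.conf):

# getent hosts web02.dummydomains.org.uk

This should print the private 192.168.10.2 address along with its aliases.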
Next, check that the hostname connects to the correct (private) IP address over the network. You can use ping for that.
[root@glusterfs-web01 ~]# ping -c2 web02
PING glusterfs-web02 (192.168.10.2) 56(84) bytes of data.
64 bytes from glusterfs-web02 (192.168.10.2): icmp_seq=1 ttl=64 time=0.894 ms
64 bytes from glusterfs-web02 (192.168.10.2): icmp_seq=2 ttl=64 time=0.393 ms

--- glusterfs-web02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.393/0.643/0.894/0.251 ms
You need to check this on both (or all) web nodes.
[root@glusterfs-web02 ~]# ping -c2 web01
PING glusterfs-web01 (192.168.10.1) 56(84) bytes of data.
64 bytes from glusterfs-web01 (192.168.10.1): icmp_seq=1 ttl=64 time=0.933 ms
64 bytes from glusterfs-web01 (192.168.10.1): icmp_seq=2 ttl=64 time=0.383 ms

--- glusterfs-web01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.383/0.658/0.933/0.275 ms
Here we have to allow incoming connections by altering the iptables configuration on all web nodes. My network setup on web02 is shown below; the private network is on the eth2 interface.
[root@glusterfs-web02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:09:1b:3b brd ff:ff:ff:ff:ff:ff
    inet 162.13.184.243/24 brd 162.13.184.255 scope global eth0
    inet6 2a00:1a48:7808:101:be76:4eff:fe09:1b3b/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::be76:4eff:fe09:1b3b/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:09:1b:3c brd ff:ff:ff:ff:ff:ff
    inet 10.181.140.163/19 brd 10.181.159.255 scope global eth1
    inet6 fe80::be76:4eff:fe09:1b3c/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:08:ea:0a brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.2/24 brd 192.168.10.255 scope global eth2
    inet6 fe80::be76:4eff:fe08:ea0a/64 scope link
       valid_lft forever preferred_lft forever
And web02's iptables configuration currently looks like this:
[root@glusterfs-web02 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target  prot opt in  out  source     destination
1       12   824 ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0    ctstate RELATED,ESTABLISHED
2        0     0 ACCEPT  icmp --  *   *    0.0.0.0/0  0.0.0.0/0
3        0     0 ACCEPT  all  --  lo  *    0.0.0.0/0  0.0.0.0/0
4        0     0 ACCEPT  tcp  --  *   *    0.0.0.0/0  0.0.0.0/0    ctstate NEW tcp dpt:22
5        0     0 REJECT  all  --  *   *    0.0.0.0/0  0.0.0.0/0    reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target  prot opt in  out  source     destination
1        0     0 REJECT  all  --  *   *    0.0.0.0/0  0.0.0.0/0    reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 7 packets, 788 bytes)
num   pkts bytes target  prot opt in  out  source     destination
On web02 I insert an ACCEPT rule at line 4, before the REJECT rule. This rule allows all incoming connections on the eth2 interface.
# iptables -I INPUT 4 -i eth2 -p all -j ACCEPT
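Allowing everything on eth2 is fine for an isolated private network, but if you'd rather be stricter, a tighter alternative (standard iptables, though not what I used here) is to accept only the Gluster ports from the private subnet:

# iptables -I INPUT 4 -s 192.168.10.0/24 -p tcp -m multiport --dports 111,24007:24009 -j ACCEPT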
Now on web01, probing web02 returns a better result.
[root@glusterfs-web01 ~]# gluster peer probe web02.dummydomains.org.uk
peer probe: success.
Back on web02, make sure you save the firewall configuration.
# service iptables save
# service iptables reload
Now make the same firewall changes on web01, so that probing it from web02 succeeds too.
[root@glusterfs-web02 ~]# gluster peer probe web01.dummydomains.org.uk
peer probe: success.
You might also want to run peer status to check what things look like.
[root@glusterfs-web01 ~]# gluster peer status
Number of Peers: 1

Hostname: web02.dummydomains.org.uk
Uuid: 29ca7ff7-f19b-4844-89de-6356ca4b51ff
State: Peer in Cluster (Connected)
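Incidentally, if you ever probe the wrong host, the probe can be backed out before you try again:

# gluster peer detach web02.dummydomains.org.uk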
Don't forget that gluster is also a shell, as demonstrated from web02 below.
[root@glusterfs-web02 ~]# gluster
gluster> peer status
Number of Peers: 1

Hostname: 192.168.10.1
Uuid: 38fb3a93-133f-4588-95a0-5ec8cd5265e3
State: Peer in Cluster (Connected)
Other names:
web01.dummydomains.org.uk
gluster> exit
…or, better yet:
[root@glusterfs-web01 ~]# gluster pool list
UUID                                    Hostname                    State
29ca7ff7-f19b-4844-89de-6356ca4b51ff    web02.dummydomains.org.uk   Connected
38fb3a93-133f-4588-95a0-5ec8cd5265e3    localhost                   Connected
And again from web02:
[root@glusterfs-web02 ~]# gluster pool list
UUID                                    Hostname        State
38fb3a93-133f-4588-95a0-5ec8cd5265e3    192.168.10.1    Connected
29ca7ff7-f19b-4844-89de-6356ca4b51ff    localhost       Connected
Create the Gluster volume with the following command. I had to use the force option, or it complained about creating volumes on the root partition.
[root@glusterfs-web01 ~]# gluster volume create dummydomainsVol replica 2 transport tcp web01.dummydomains.org.uk:/data web02.dummydomains.org.uk:/data force
volume create: dummydomainsVol: success: please start the volume to access data
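The cleaner approach, if you have a spare block device, is to give each brick its own filesystem so that force isn't needed. A rough sketch, with a hypothetical /dev/xvdb1 device (the mkfs/mkdir/mount steps run on both nodes; the volume create runs once):

# mkfs.xfs /dev/xvdb1
# mkdir -p /bricks/brick1
# mount /dev/xvdb1 /bricks/brick1
# gluster volume create dummydomainsVol replica 2 transport tcp web01.dummydomains.org.uk:/bricks/brick1/data web02.dummydomains.org.uk:/bricks/brick1/data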
Then start the volume:
[root@glusterfs-web01 ~]# gluster volume start dummydomainsVol
volume start: dummydomainsVol: success
Check the status from any node and you should see something like this:
[root@glusterfs-web02 ~]# gluster volume info

Volume Name: dummydomainsVol
Type: Replicate
Volume ID: 4694564e-134d-4f85-9716-568a0a6f4156
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: web01.dummydomains.org.uk:/data
Brick2: web02.dummydomains.org.uk:/data
Options Reconfigured:
performance.readdir-ahead: on
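For a more operational view, showing whether each brick process and the self-heal daemon are actually online and which ports they are using, there's also:

# gluster volume status dummydomainsVol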
Here we mount the Gluster volume (on any node) to the /var/www/vhosts directory.
[root@glusterfs-web01 ~]# mount.glusterfs web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts
Check the mount output.
[root@glusterfs-web01 ~]# mount | grep web02
web02.dummydomains.org.uk:/dummydomainsVol on /var/www/vhosts type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
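One thing worth knowing: the server named in the mount command is only used to fetch the volume file; after that the client talks to every brick directly. If web02 happens to be down at mount time, though, the mount itself fails, so a fallback server is worth adding. Depending on your GlusterFS version the option is spelled backupvolfile-server or backup-volfile-servers:

# mount -t glusterfs -o backupvolfile-server=web01.dummydomains.org.uk web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts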
Hopefully that worked. You now want to unmount the volume and remount it via fstab so it's persistent across a reboot.
# umount -v /var/www/vhosts
# vi /etc/fstab
# mount -a
# mount | grep web02
web02.dummydomains.org.uk:/dummydomainsVol on /var/www/vhosts type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
The contents of my fstab now look like this:
[root@glusterfs-web01 ~]# cat /etc/fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/xvda1 / ext3 defaults,noatime,barrier=0 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
#/dev/xvdc1 none swap sw 0 0
web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts glusterfs defaults,_netdev 0 0
Once you've also mounted the volume from web02, you're good to test it!
I ran the following on web01…
# mkdir -v /var/www/vhosts/dummydomains.org.uk
# for i in $(seq 1 20); do touch /var/www/vhosts/dummydomains.org.uk/web-file-$i.txt; done
…and then listed the directory on web02.
[root@glusterfs-web02 ~]# ls -l /var/www/vhosts/dummydomains.org.uk/
total 0
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-10.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-11.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-12.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-13.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-14.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-15.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-16.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-17.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-18.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-19.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-1.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-20.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-2.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-3.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-4.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-5.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-6.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-7.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-8.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-9.txt
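Since the bricks live at /data on each node, you can also peek at a brick directly to see the replicated copies; just never write into a brick behind Gluster's back:

# ls /data/dummydomains.org.uk/ | wc -l

This should report 20 on both nodes.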
Nice!