Multiple Web Nodes Using GlusterFS with CentOS 6.6

Check to see if there are any trusted storage pools already configured.

# gluster peer status
Number of Peers: 0

Check you can establish communication from web01 to web02.

[root@glusterfs-web01 ~]# gluster peer probe web02.dummydomains.org.uk
peer probe: failed: Probe returned with unknown errno 107

This will fail if the peer isn't listening, or if the firewall is blocking communication on the following ports:

111 tcp
24007 tcp 
24008 tcp
24009 tcp
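
If you would rather open just those ports to the other node(s) instead of a whole interface (the interface-wide rule is what I end up using below), something along these lines should work. This is only a sketch; adjust the insert position and the source network (192.168.10.0/24 is the private network I set up in the next step) to your own setup.

# allow the GlusterFS portmapper/management/brick ports from the private network,
# inserted before the default REJECT rule
iptables -I INPUT 4 -p tcp -s 192.168.10.0/24 -m multiport --dports 111,24007:24009 -j ACCEPT
service iptables save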

Here I am using Rackspace cloud networks to create a virtual private network of 192.168.10.0/24 and attach it to each of the web nodes. To make sure connections from web01.dummydomains.org.uk to web02.dummydomains.org.uk use the private IP, I need to edit my hosts file, which takes precedence over DNS (DNS would return the public IP address).

My hosts file on web01 now looks like this:


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2a00:1a48:7808:101:be76:4eff:fe08:9cec glusterfs-web01
162.13.183.215 glusterfs-web01
10.181.138.233 glusterfs-web01
192.168.10.1 glusterfs-web01
192.168.10.2 glusterfs-web02 web02.dummydomains.org.uk web02

While on web02, it looks like this:


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2a00:1a48:7808:101:be76:4eff:fe09:1b3b glusterfs-web02
10.181.140.163 glusterfs-web02
162.13.184.243 glusterfs-web02
192.168.10.2 glusterfs-web02
192.168.10.1 glusterfs-web01 web01.dummydomains.org.uk web01

Check you can use the hostname to connect to the correct (private) IP address. You can use ping for that.

[root@glusterfs-web01 ~]# ping -c2 web02 
PING glusterfs-web02 (192.168.10.2) 56(84) bytes of data.
64 bytes from glusterfs-web02 (192.168.10.2): icmp_seq=1 ttl=64 time=0.894 ms
64 bytes from glusterfs-web02 (192.168.10.2): icmp_seq=2 ttl=64 time=0.393 ms

--- glusterfs-web02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.393/0.643/0.894/0.251 ms

You need to check this on both (or all) web nodes.

[root@glusterfs-web02 ~]# ping -c2 web01
PING glusterfs-web01 (192.168.10.1) 56(84) bytes of data.
64 bytes from glusterfs-web01 (192.168.10.1): icmp_seq=1 ttl=64 time=0.933 ms
64 bytes from glusterfs-web01 (192.168.10.1): icmp_seq=2 ttl=64 time=0.383 ms

--- glusterfs-web01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.383/0.658/0.933/0.275 ms

Here we have to allow incoming connections by altering the iptables configuration on all web nodes. My network setup on web02 is shown below. The private network is on the eth2 interface.

[root@glusterfs-web02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:09:1b:3b brd ff:ff:ff:ff:ff:ff
    inet 162.13.184.243/24 brd 162.13.184.255 scope global eth0
    inet6 2a00:1a48:7808:101:be76:4eff:fe09:1b3b/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::be76:4eff:fe09:1b3b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:09:1b:3c brd ff:ff:ff:ff:ff:ff
    inet 10.181.140.163/19 brd 10.181.159.255 scope global eth1
    inet6 fe80::be76:4eff:fe09:1b3c/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:08:ea:0a brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.2/24 brd 192.168.10.255 scope global eth2
    inet6 fe80::be76:4eff:fe08:ea0a/64 scope link 
       valid_lft forever preferred_lft forever

And web02's iptables configuration currently looks like this.

[root@glusterfs-web02 ~]# iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1       12   824 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           ctstate RELATED,ESTABLISHED 
2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
3        0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
4        0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           ctstate NEW tcp dpt:22 
5        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT 7 packets, 788 bytes)
num   pkts bytes target     prot opt in     out     source               destination

On web02 I insert an ACCEPT rule at position 4 in the INPUT chain, before the REJECT rule. This rule allows all incoming connections on the eth2 interface.

# iptables -I INPUT 4 -i eth2 -p all -j ACCEPT

Now on web01, probing web02 returns a better result.

[root@glusterfs-web01 ~]# gluster peer probe web02.dummydomains.org.uk
peer probe: success.

Back on web02, make sure you save the firewall configuration.

# service iptables save
# service iptables reload

Now make the same changes on web01, so that the probe from web02 returns the same result.

[root@glusterfs-web02 ~]# gluster peer probe web01.dummydomains.org.uk
peer probe: success.

You might also want to use peer status to check what things look like.

[root@glusterfs-web01 ~]# gluster peer status
Number of Peers: 1

Hostname: web02.dummydomains.org.uk
Uuid: 29ca7ff7-f19b-4844-89de-6356ca4b51ff
State: Peer in Cluster (Connected)

Don’t forget that gluster is also a shell – demonstrated from web02 below.

[root@glusterfs-web02 ~]# gluster
gluster> peer status
Number of Peers: 1

Hostname: 192.168.10.1
Uuid: 38fb3a93-133f-4588-95a0-5ec8cd5265e3
State: Peer in Cluster (Connected)
Other names:
web01.dummydomains.org.uk
gluster> exit

….or, better yet!

[root@glusterfs-web01 ~]# gluster pool list
UUID					Hostname                 	State
29ca7ff7-f19b-4844-89de-6356ca4b51ff	web02.dummydomains.org.uk	Connected 
38fb3a93-133f-4588-95a0-5ec8cd5265e3	localhost                	Connected

And again from web02.

[root@glusterfs-web02 ~]# gluster pool list
UUID					Hostname    	State
38fb3a93-133f-4588-95a0-5ec8cd5265e3	192.168.10.1	Connected 
29ca7ff7-f19b-4844-89de-6356ca4b51ff	localhost   	Connected

Create the Gluster volume with the following command. I had to use the force option or it complained about creating volumes on the root partition.

[root@glusterfs-web01 ~]# gluster volume create dummydomainsVol replica 2 transport tcp web01.dummydomains.org.uk:/data web02.dummydomains.org.uk:/data force
volume create: dummydomainsVol: success: please start the volume to access data

Then start the volume.

[root@glusterfs-web01 ~]# gluster volume start dummydomainsVol
volume start: dummydomainsVol: success

Check the status from any node and you should see something similar to the below.

[root@glusterfs-web02 ~]# gluster volume info
 
Volume Name: dummydomainsVol
Type: Replicate
Volume ID: 4694564e-134d-4f85-9716-568a0a6f4156
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: web01.dummydomains.org.uk:/data
Brick2: web02.dummydomains.org.uk:/data
Options Reconfigured:
performance.readdir-ahead: on
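
The volume info above shows the volume as started; to check that the individual brick processes are actually online, you can also ask for the volume status (output not shown here).

gluster volume status dummydomainsVol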

Here we mount the gluster volume (on any node) to the /var/www/vhosts directory.

[root@glusterfs-web01 ~]# mount.glusterfs web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts

Check the mount output.

[root@glusterfs-web01 ~]# mount | grep web02
web02.dummydomains.org.uk:/dummydomainsVol on /var/www/vhosts type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Hopefully that worked. You now want to unmount the volume and mount it via /etc/fstab so it's persistent across reboots.

# umount -v /var/www/vhosts
# vi /etc/fstab
# mount -a
# mount | grep web02       
web02.dummydomains.org.uk:/dummydomainsVol on /var/www/vhosts type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

The contents of my fstab look like this.


[root@glusterfs-web01 ~]# cat /etc/fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/xvda1 / ext3 defaults,noatime,barrier=0 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
#/dev/xvdc1 none swap sw 0 0
web02.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts glusterfs defaults,_netdev 0 0

Once you've also mounted the volume on web02, you're good to test it!
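
For web02 that's the same idea; a sketch, assuming you point the mount at its peer and add the matching fstab entry:

# on web02 - mount the replicated volume and make it persistent across reboots
mount.glusterfs web01.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts
echo "web01.dummydomains.org.uk:/dummydomainsVol /var/www/vhosts glusterfs defaults,_netdev 0 0" >> /etc/fstab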

I ran the below command on web01….

mkdir -v /var/www/vhosts/dummydomains.org.uk; for i in $(seq 1 20); do touch /var/www/vhosts/dummydomains.org.uk/web-file-$i.txt; done

…and then listed the following directory on web02.

[root@glusterfs-web02 ~]# ls -l /var/www/vhosts/dummydomains.org.uk/
total 0
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-10.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-11.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-12.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-13.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-14.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-15.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-16.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-17.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-18.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-19.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-1.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-20.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-2.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-3.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-4.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-5.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-6.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-7.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-8.txt
-rw-r--r-- 1 root root 0 Jul 17 01:03 web-file-9.txt

Nice!

Installing GlusterFS on CentOS 6.6

You will need to download the following repository file to the /etc/yum.repos.d/ directory before trying to install the glusterfs-server package.

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
yum install glusterfs-server

It will pull in a load of dependencies…

===========================================================================================================================================
 Package                                      Arch                  Version                            Repository                     Size
===========================================================================================================================================
Installing:
 glusterfs-server                             x86_64                3.7.2-3.el6                        glusterfs-epel                1.2 M
Installing for dependencies:
 device-mapper-event                          x86_64                1.02.90-2.el6_6.3                  updates                       122 k
 device-mapper-event-libs                     x86_64                1.02.90-2.el6_6.3                  updates                       116 k
 device-mapper-persistent-data                x86_64                0.3.2-1.el6                        base                          2.5 M
 glusterfs-cli                                x86_64                3.7.2-3.el6                        glusterfs-epel                155 k
 glusterfs-client-xlators                     x86_64                3.7.2-3.el6                        glusterfs-epel                919 k
 glusterfs-fuse                               x86_64                3.7.2-3.el6                        glusterfs-epel                119 k
 keyutils                                     x86_64                1.4-5.el6                          base                           39 k
 libevent                                     x86_64                1.4.13-4.el6                       base                           66 k
 libgssglue                                   x86_64                0.1-11.el6                         base                           23 k
 libtirpc                                     x86_64                0.2.1-10.el6                       base                           79 k
 lvm2                                         x86_64                2.02.111-2.el6_6.3                 updates                       817 k
 lvm2-libs                                    x86_64                2.02.111-2.el6_6.3                 updates                       901 k
 nfs-utils                                    x86_64                1:1.2.3-54.el6                     base                          326 k
 nfs-utils-lib                                x86_64                1.1.5-9.el6_6                      updates                        68 k
 pyxattr                                      x86_64                0.5.0-1.el6                        epel                           24 k
 rpcbind                                      x86_64                0.2.0-11.el6                       base                           51 k
 userspace-rcu                                x86_64                0.7.7-1.el6                        epel                           60 k
Updating for dependencies:
 glusterfs                                    x86_64                3.7.2-3.el6                        glusterfs-epel                416 k
 glusterfs-api                                x86_64                3.7.2-3.el6                        glusterfs-epel                 72 k
 glusterfs-libs                               x86_64                3.7.2-3.el6                        glusterfs-epel                318 k

Transaction Summary
===========================================================================================================================================
Install      18 Package(s)
Upgrade       3 Package(s)

Total download size: 8.3 M
Is this ok [y/N]: y

…Accept the GPG key imports you are alerted to and proceed with the installation.

warning: rpmts_HdrFromFdno: Header V4 RSA/SHA1 Signature, key ID 4ab22bb3: NOKEY
Retrieving key from http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
Importing GPG key 0x4AB22BB3:
 Userid: "Gluster Packager <glusterpackager@download.gluster.org>"
 From  : http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
Is this ok [y/N]: y
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Importing GPG key 0x0608B895:
 Userid : EPEL (6) <epel@fedoraproject.org>
 Package: epel-release-6-8.noarch (@epel/6.6)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
Is this ok [y/N]: y

Ensure the glusterd daemon is set to start on boot, and start the service.

chkconfig --levels 235 glusterd on
service glusterd start

You can check the status with….

[root@glusterfs-web01 ~]# /etc/init.d/glusterfsd status
glusterfsd is stopped
[root@glusterfs-web01 ~]# /etc/init.d/glusterd status  
glusterd (pid 6148) is running...

You can check the version with the following.

[root@glusterfs-web01 ~]# glusterfsd --version
glusterfs 3.7.2 built on Jun 23 2015 12:13:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

Rackspace Cloud Monitoring Agent

The Rackspace cloud monitoring agent allows you to monitor CPU, memory, filesystem usage and system processes. It does this by collecting information about the system and pushing it out to the Rackspace Cloud Monitoring web services, where it can be analyzed, graphed, and alerted on. It is this technology that the Rackspace monitoring checks are built upon.

Plus you get a nice pretty little bar graph in the server details section of the control panel 🙂

Rackspace monitoring agent

Install the Agent

While the instructions used here are for Ubuntu 14.04 LTS, this page lists the exact commands needed for all major distros.

wget http://meta.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64/rackspace-cloud-monitoring-meta-stable_1.0_all.deb
dpkg -i rackspace-cloud-monitoring-meta-stable_1.0_all.deb
apt-get update
apt-get install rackspace-monitoring-agent

If your distribution of choice isn’t listed, you can always install from source.

Configure and Start Daemon

If the /etc/rackspace-monitoring-agent.cfg file isn’t present, you will need to choose one of the methods below to start the service.
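
A quick way to check whether it's there:

ls -l /etc/rackspace-monitoring-agent.cfg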

Quick Method

Run the below commands, replacing the username and API key with your own.

rackspace-monitoring-agent --setup --username <your-username> --apikey <your-api-key>
rackspace-monitoring-agent start -D

Interactive Method

Alternatively you can simply run the below to interactively enter your username and your API key or password.

rackspace-monitoring-agent --setup

Followed by…

service rackspace-monitoring-agent start

Updating

The monitoring agent does not update itself. However, if you installed using a package manager, such as apt-get, agent updates will be pulled in and applied with regular system updates anyway.

apt-get update
apt-get dist-upgrade

Uninstalling the Agent

Assuming you didn't install from source and used your distro's package manager, you uninstall with the same method. I am using Ubuntu, so…

apt-get remove rackspace-monitoring-agent

Or if you’re using CentOS/RHEL.

yum remove rackspace-monitoring-agent

Related Documents

https://github.com/virgo-agent-toolkit/rackspace-monitoring-agent

http://www.rackspace.com/knowledge_center/article/install-and-configure-the-cloud-monitoring-agent#UpgradeAgent

http://meta.packages.cloudmonitoring.rackspace.com/

http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/install-configure.html

http://www.rackspace.com/knowledge_center/article/about-the-cloud-monitoring-agent

Protect Your Cloud Infrastructure Servers with Isolated Cloud Networks

Create a Private Cloud Network

Create an isolated cloud network. Here I am using the supernova client to communicate with the Rackspace OpenStack API.

supernova uk network-create "Infrastructure" "192.168.3.0/24"
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| cidr     | 192.168.3.0/24                       |
| id       | 4d15b8ad-45c5-4169-a4fa-d36f1a776efd |
| label    | Infrastructure                       |
+----------+--------------------------------------+

Take note of the id – you’ll need it shortly!
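
If you do lose it, listing your networks again should bring it back; with supernova wrapping the Rackspace novaclient, something like this ought to do it.

supernova uk network-list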

Create a Proxy Server and Attach to the Private Network

supernova uk boot proxy-bast --flavor 2 --image 189678ca-fe2c-4b7a-a986-30c3660edfa5 --nic net-id=4d15b8ad-45c5-4169-a4fa-d36f1a776efd

The above creates a server using the CentOS 6.6 image. Other images of interest are:

+--------------------------------------+------------------------------------------+--------+
| ID                                   | Name                                     | Status |
+--------------------------------------+------------------------------------------+--------+
| 189678ca-fe2c-4b7a-a986-30c3660edfa5 | CentOS 6 (PVHVM)                         | ACTIVE |
| f8ae535e-67c0-41a5-bf55-b06d0ee40cc2 | CentOS 7 (PVHVM)                         | ACTIVE |
| 6909f56c-bd77-411a-8c0e-c37876b68d1d | Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)   | ACTIVE |
+--------------------------------------+------------------------------------------+--------+

Proxy Bastion Configuration

Later we create a cloud server with no public IP, which is protected by sitting behind our proxy bastion. In order for our protected server to have access to the internet, we need to apply firewall rules on the bastion for IP forwarding and Network Address Translation. This process differs depending on which distribution you use. Here I cover CentOS 6.6, CentOS 7 and Ubuntu 14.04.

CentOS 6.6

Under CentOS 6.6 and before, you need to configure iptables to do the forwarding and the Network Address Translation (NAT). We will be forwarding the traffic from the eth2 interface out through the eth0 interface. We also use source NAT (SNAT, rather than MASQUERADE, since the public IP is static) so that traffic coming from our protected infrastructure takes on the public IP address of our proxy bastion.

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:08:40:d8 brd ff:ff:ff:ff:ff:ff
    inet 95.138.163.75/24 brd 95.138.163.255 scope global eth0
    inet6 2a00:1a48:7805:113:be76:4eff:fe08:40d8/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::be76:4eff:fe08:40d8/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether bc:76:4e:08:3d:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.1/24 brd 192.168.3.255 scope global eth2
    inet6 fe80::be76:4eff:fe08:3d31/64 scope link 
       valid_lft forever preferred_lft forever
Enable IP Forwarding

To enable forwarding, you need to enable it in two places. The first is /proc/sys/net/ipv4/ip_forward.

echo 1 > /proc/sys/net/ipv4/ip_forward

The other is /etc/sysctl.conf. The below uses grep to check the value of net.ipv4.ip_forward.

grep net.ipv4.ip_forward /etc/sysctl.conf 
net.ipv4.ip_forward = 0

If it is zero, set it to one as shown below.

net.ipv4.ip_forward = 1
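
To load the values from /etc/sysctl.conf without a reboot (the echo above already changed the running value):

sysctl -p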
Configure Static NAT and Forwarding Rules
iptables --table nat --append POSTROUTING --out-interface eth0 -j SNAT --to 95.138.163.75
iptables --append FORWARD --in-interface eth2 -j ACCEPT
service iptables save

We also need to remove the default REJECT rule from the FORWARD chain:

iptables -D FORWARD 1

Here I delete rule number one from the FORWARD chain. Make sure you delete the correct line. To see the line numbers, use:

[root@proxy-bast ~]# iptables -vnL --line-number
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1    44444   62M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           ctstate RELATED,ESTABLISHED 
2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
3        0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
4        1    60 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           ctstate NEW tcp dpt:22 
5        1    40 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT 8769 packets, 544K bytes)
num   pkts bytes target     prot opt in     out     source               destination

Make sure you have restarted everything.

service iptables restart
service network restart

Now configure the default gateway on the infrastructure server.

CentOS 7

With the introduction of firewalld, CentOS 7 now does things a little differently.

Method 1

This method uses the predefined zones available to us and is by far the easiest method to apply. The external zone has IP masquerading enabled by default so there should be little to do.

Define Your Zones

To view your zone setup.

[root@proxy-bast ~]# firewall-cmd --get-default-zone
public
[root@proxy-bast ~]# firewall-cmd --get-active-zones
public
  interfaces: eth0 eth1 eth2

To see the supported predefined zones and their settings, use the --list-all-zones option.

firewall-cmd --list-all-zones

The zones I will be using are external, work and internal.

external
  interfaces: 
  sources: 
  services: ssh
  ports: 
  masquerade: yes
  forward-ports: 
  icmp-blocks: 
  rich rules:

work
  interfaces: 
  sources: 
  services: dhcpv6-client ipp-client ssh
  ports: 
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules: 

internal
  interfaces: 
  sources: 
  services: dhcpv6-client ipp-client mdns samba-client ssh
  ports: 
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules: 

My setup looks like this…

Interface   Firewall Zone   Name                     IPv4
----------------------------------------------------------
eth0        external        PublicNet (Internet)     162.13.87.197
eth1        work            ServiceNet (Rackspace)   10.179.198.73
eth2        internal        Infrastructure           192.168.3.1

…and can be achieved with the below commands. Don’t forget to restart firewalld!

firewall-cmd --permanent --zone=external --change-interface=eth0
firewall-cmd --permanent --zone=work --change-interface=eth1
firewall-cmd --permanent --zone=internal --change-interface=eth2
firewall-cmd --reload
systemctl restart firewalld
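
Afterwards you can confirm the interfaces have landed in the intended zones, and that masquerading is on for the external zone.

firewall-cmd --get-active-zones
firewall-cmd --zone=external --list-all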
Method 2

With this method we use the --direct option so we can include traditional iptables rules.

Enable IP Forwarding

This step is not needed if you are using the predefined “external” zone provided by firewalld, as masquerade is enabled by default already.

echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf

To check it's enabled:

[root@proxy-bast ~]# sysctl -p
net.ipv4.conf.eth0.arp_notify = 1
vm.swappiness = 0
net.ipv4.ip_forward = 1
Configure Static NAT and Forwarding Rules
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING --out-interface eth0 -j SNAT --to 162.13.87.197
firewall-cmd --permanent --direct --passthrough ipv4 --append FORWARD --in-interface eth2 -j ACCEPT
firewall-cmd --reload

systemctl restart network
systemctl restart firewalld
Method 3

Revert to the tried-and-tested iptables.

Revert to Using iptables
systemctl stop firewalld
systemctl disable firewalld

yum install iptables-services

touch /etc/sysconfig/iptables
systemctl start iptables
systemctl enable iptables

touch /etc/sysconfig/ip6tables
systemctl start ip6tables
systemctl enable ip6tables

Now you can follow the instructions for CentOS 6.6.
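
On this host the equivalent of the CentOS 6.6 rules would look roughly like the below. This is a sketch rather than a capture from my session; check iptables -vnL --line-numbers first, as whether you also need to delete a REJECT rule from the FORWARD chain depends on the ruleset iptables comes up with.

# NAT outbound traffic to the bastion's public IP and allow forwarding from eth2
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 162.13.87.197
iptables -A FORWARD -i eth2 -j ACCEPT
# persist the rules (provided by the iptables-services package installed above)
service iptables save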

Ubuntu 14.04 LTS

In Ubuntu we use the Uncomplicated Firewall (UFW).

Enable IP Forwarding

Use a text editor to open up the below file as root…

nano /etc/default/ufw

…and set the default forward policy to ACCEPT.

DEFAULT_FORWARD_POLICY="ACCEPT"
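
If you prefer a one-liner over opening an editor (assuming the stock file still has the policy set to DROP):

sed -i 's/^DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw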

We also need to edit the below…

nano /etc/ufw/sysctl.conf

…and uncomment the following lines.

net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
Configure Static NAT and Forwarding Rules

As root, open the below file.

nano /etc/ufw/before.rules

From the top, my configuration file looks like the below. The lines I inserted are the nat table block, from *nat down to the first COMMIT.

#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
#   ufw-before-input
#   ufw-before-output
#   ufw-before-forward
#
# nat Table rules
*nat
:POSTROUTING ACCEPT [0:0]
:PREROUTING ACCEPT [0:0]

-A POSTROUTING -s 192.168.3.0/24 -o eth0 -j SNAT --to-source 162.13.87.197
-A PREROUTING -i eth2 -j ACCEPT
COMMIT


# Don't delete these required lines, otherwise there will be errors
*filter
:ufw-before-input - [0:0]
:ufw-before-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-not-local - [0:0]
# End required lines


# allow all on loopback
-A ufw-before-input -i lo -j ACCEPT

...

You will need to restart ufw for the changes to take effect.

ufw disable && sudo ufw enable

For some reason this wiped my SSH rule:

ufw allow ssh
ufw reload
ufw status verbose

Create Infrastructure Server

Here we spin up a server connected to our isolated cloud network, with no public interface. All communication must go via the proxy-bast server.

supernova uk boot protected --flavor 2 --image 189678ca-fe2c-4b7a-a986-30c3660edfa5 --nic net-id=4d15b8ad-45c5-4169-a4fa-d36f1a776efd --no-service-net --no-public
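
With no public or ServiceNet interface, the only way onto this server is across the isolated network from the bastion, using the root password shown in the boot output. The address below is the one my protected server ends up with (used again later); yours may differ.

# from proxy-bast, over the 192.168.3.0/24 network
ssh root@192.168.3.4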

Configure Internet Gateway

Here we simply need to route the traffic through the proxy bastion. We do this by defining it as our default gateway. We also need to set our DNS servers.

CentOS 6.6

Simplicity!

echo "GATEWAY=192.168.3.1" >> /etc/sysconfig/network
echo "nameserver 83.138.151.80" >> /etc/resolv.conf
echo "nameserver 83.138.151.81" >> /etc/resolv.conf
service network restart

CentOS 7

The default image provided by Rackspace comes with NetworkManager (nmcli) disabled, so the process is similar to previous releases.

echo "GATEWAY=192.168.3.1" >> /etc/sysconfig/network
echo "nameserver 83.138.151.80" >> /etc/resolv.conf
echo "nameserver 83.138.151.81" >> /etc/resolv.conf
echo "DNS1=83.138.151.80" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DNS2=83.138.151.81" >> /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network

Ubuntu 14.04 LTS

To define the default gateway, you need to edit the /etc/network/interfaces file.

nano /etc/network/interfaces

Mine looks like this. Make sure to add the gateway.

auto eth0
iface eth0 inet static
    address 192.168.3.4
    netmask 255.255.255.0
    gateway 192.168.3.1

You will need to manually add Rackspace's name servers to your resolv.conf. However, on Ubuntu this file is automatically generated, so instead we edit /etc/resolvconf/resolv.conf.d/base and regenerate the file using the resolvconf command.

root@protected:~# cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
root@protected:~# echo "nameserver 83.138.151.80" >> /etc/resolvconf/resolv.conf.d/base
root@protected:~# echo "nameserver 83.138.151.81" >> /etc/resolvconf/resolv.conf.d/base
root@protected:~# resolvconf -u
root@protected:~# cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 83.138.151.80
nameserver 83.138.151.81

I needed to reboot for the changes to take effect.

reboot
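
Whichever distribution you are on, a quick sanity check from the protected server is worth doing; something like the below, assuming the bastion is 192.168.3.1 as above.

ip route show        # the default route should point at 192.168.3.1
ping -c2 8.8.8.8     # tests forwarding and NAT through the bastion
ping -c2 google.com  # tests DNS resolution on top of that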

Related Documents

Rackspace Developer Blog: Protect your Infrastructure Servers with Bastion Hosts and Isolated Cloud Networks

Rackspace Developer Blog: Supernova: Managing OpenStack Environments Made Easy

Rackspace Knowledge Centre: Using OnMetal Cloud Servers through API

Fedora: Firewalld

Oracle-Base: Linux Firewall (firewalld, firewall-cmd, firewall-config)

Kevin’s Cheat Sheet: Configure iptables to act as a NAT gateway

Rackspace Developer Blog: Getting Started: Using rackspace-novaclient to manage Cloud Servers

James Rossiter: Forward ports in Ubuntu Server 12.04 using ufw

Ubuntu Documentation: Firewall

Github: UFW

Code Ghar: Ubuntu 12.04 IPv4 NAT Gateway and DHCP Server

Linux Gateway: A More Complex Firewall

netfilter.org: Saying How to Mangle the Packets

Ubuntu Documentation: IptablesHowTo

Major.io: Delete single iptables rules

iptables.info: Iptables

snipt.net: Insert an iptables rule on a specific line number with a comment, and restore all rules after reboot

stackexchange.com: How do I set my DNS on Ubuntu 14.04?

thesimplesynthesis.com: How to Set a Static IP and DNS in Ubuntu 14.04

Rackspace Knowledge Centre: Ubuntu – Setup

Rackspace Knowledge Centre: Introduction to iptables

Rackspace Knowledge Centre: Sample iptables ruleset

Ubuntu Geek: Howto add permanent static routes in Ubuntu

NixCraft: Debian / Ubuntu Linux Setting a Default Gateway

Ask Ubuntu: Set up permanent routing (Ubuntu 13.04)

cviorel.com: How to set up a VPN server on Ubuntu

Redhat Support: 10.4. Static Routes and the Default Gateway