The Nectar Research Cloud now supports Load Balancer-as-a-Service (LBaaS) for projects.

Conceptually, a Load Balancer is a service that listens for requests and forwards them to servers within a pool. There are a number of reasons why you might want to do this, but the most common are to achieve higher throughput than a single server can provide, or to build a high-availability architecture.
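
The round-robin idea (the algorithm we use later in this tutorial) can be sketched in a few lines of shell. This is purely illustrative, with made-up server names; real request distribution is handled by the load balancer itself.

```shell
# Purely illustrative: a round-robin balancer cycles through its pool
# in order, so request N goes to backend (N mod pool size) + 1.
POOL_SIZE=3
for REQ in 0 1 2 3 4 5; do
    echo "request $REQ -> server-$(( (REQ % POOL_SIZE) + 1 ))"
done
```

Requests 0, 3 land on server-1, requests 1, 4 on server-2, and so on, which matches the alternating responses you will see from cURL later in this tutorial.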


Image courtesy of Rackspace: https://blog.rackspace.com/openstack-load-balancing-as-a-service-lbaas

If you wish to use LBaaS, you'll need to apply for Load Balancer and Floating IP quota for your project. This can be done through the Nectar Research Cloud Dashboard, under the Allocations section, either for new projects or as an amendment to an existing project's allocation.

A prerequisite for working with LBaaS is an understanding of the Nectar Advanced Networking service and the OpenStack API. For more information, you might also wish to see the OpenStack LBaaS documentation.

Using LBaaS on the Nectar Research Cloud

There are a number of points that you should be aware of when using LBaaS on the Nectar Research Cloud:

  • It is currently only possible to create LBaaS resources on the Nectar Research Cloud using command-line tools, namely python-openstackclient and python-neutronclient. Dashboard support for LBaaS is planned for the future.
  • Load balancing is only available for TCP. Support for higher-level protocols such as HTTP and HTTPS is currently not available.
  • Load balancing should still be regarded as a beta offering rather than fully supported. See, for example, https://blog.rackspace.com/openstack-load-balancing-as-a-service-lbaas.


An LBaaS example on the Nectar Research Cloud

Précis

The following tutorial will assume you already have a private network with a subnet called my-test-subnet and a router with melbourne set as the external gateway.

There are five main steps to setting up LBaaS for your Nectar project:

  1. Create the load balancer with a floating IP attached.
  2. Create a listener.
  3. Create a server pool.
  4. Create the backend servers and add them to the pool.
  5. (Optional but recommended) Create a jumpbox or management host for accessing the backend servers.

Create the load balancer

We'll start by creating a new load balancer attached to our subnet.

$ neutron lbaas-loadbalancer-create --name my-test-lb my-test-subnet
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | fb11ce09-5187-4708-83a2-1fbe15a40467 |
| listeners           |                                      |
| name                | my-test-lb                           |
| operating_status    | ONLINE                               |
| pools               |                                      |
| provider            | midonet                              |
| provisioning_status | ACTIVE                               |
| tenant_id           | f42f9588576c43969760d81384b83b1f     |
| vip_address         | 172.31.6.11                          |
| vip_port_id         | 3985624a-3397-457d-a92c-3b6191082aa9 |
| vip_subnet_id       | 376e055d-ce1d-4a63-9262-85d9bafeca47 |
+---------------------+--------------------------------------+

We also want a floating IP for our load balancer:

$ openstack floating ip create melbourne 
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-03-05T01:53:13Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 103.6.252.17                         |
| floating_network_id | e48bdd06-cc3e-46e1-b7ea-64af43c74ef8 |
| id                  | 1af4e01b-54fe-4d8f-a3d3-40fda6d65290 |
| name                | 103.6.252.17                         |
| port_id             | None                                 |
| project_id          | f42f9588576c43969760d81384b83b1f     |
| revision_number     | 1                                    |
| router_id           | None                                 |
| status              | ACTIVE                               |
| updated_at          | 2018-03-05T01:53:13Z                 |
+---------------------+--------------------------------------+

Attach the floating IP to the load balancer's Virtual IP (VIP) port:

$ neutron floatingip-associate 1af4e01b-54fe-4d8f-a3d3-40fda6d65290 \
    $(neutron lbaas-loadbalancer-show my-test-lb -c vip_port_id -f value)

The load balancer also needs to allow selected traffic to pass through to the backend servers. We'll create a new security group opening up TCP port 80.

$ neutron security-group-create my-test-secgroup
$ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 \
    --port-range-max 80 --remote-ip-prefix 0.0.0.0/0 my-test-secgroup
$ neutron port-update --security-group my-test-secgroup \
    $(neutron lbaas-loadbalancer-show my-test-lb -c vip_port_id -f value)

Create the listener

Now we'll create a listener for our load balancer. For our example, we're going to create just one listener on TCP port 80 for HTTP.

$ neutron lbaas-listener-create --name my-test-listener --loadbalancer my-test-lb \
    --protocol TCP --protocol-port 80
Created a new listener:
+---------------------------+------------------------------------------------+
| Field                     | Value                                          |
+---------------------------+------------------------------------------------+
| admin_state_up            | True                                           |
| connection_limit          | -1                                             |
| default_pool_id           |                                                |
| default_tls_container_ref |                                                |
| description               |                                                |
| id                        | 852718f8-14c2-4140-b062-927d410b3c10           |
| loadbalancers             | {"id": "fb11ce09-5187-4708-83a2-1fbe15a40467"} |
| name                      | my-test-listener                               |
| protocol                  | TCP                                            |
| protocol_port             | 80                                             |
| sni_container_refs        |                                                |
| tenant_id                 | f42f9588576c43969760d81384b83b1f               |
+---------------------------+------------------------------------------------+

Create the pool

Create a pool for our load balancer. The pool handles connections to our backend servers. Once our servers have been built, we'll add them to this pool.

$ neutron lbaas-pool-create --name my-test-pool --lb-algorithm ROUND_ROBIN \
    --listener my-test-listener --protocol TCP
Created a new pool:
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| healthmonitor_id    |                                                |
| id                  | aee31483-30bf-4d57-9ff9-cf6cc432378a           |
| lb_algorithm        | ROUND_ROBIN                                    |
| listeners           | {"id": "852718f8-14c2-4140-b062-927d410b3c10"} |
| loadbalancers       | {"id": "fb11ce09-5187-4708-83a2-1fbe15a40467"} |
| members             |                                                |
| name                | my-test-pool                                   |
| protocol            | TCP                                            |
| session_persistence |                                                |
| tenant_id           | f42f9588576c43969760d81384b83b1f               |
+---------------------+------------------------------------------------+

Create the backend servers

Find the ID of the image we want to use for our servers. We filter on the owner property so that only officially supported Nectar images are shown. We're going to choose Debian 9 for this example.

$ IMAGE_ID=$(openstack image list --property owner=28eadf5ad64b42a4929b2fb7df99275c \
    --name 'NeCTAR Debian 9 (Stretch) amd64' -c ID -f value)

Look up our network ID, which we'll need when attaching servers to the network.

$ NETWORK_ID=$(openstack network show -c id -f value my-test-network)

Now we use that information to create the backend servers. For this demonstration, we create a simple cloud-init configuration which will install the NGINX web server and a single PHP page for it to serve.

The index.php file will be served at the root of our web server and print the hostname and request addresses. Later, we'll be able to validate which backend server is serving our request to the load balancer, and the address information it receives in the HTTP request header.

Save this config as user-data.txt; we'll pass this file to our server create step:

#cloud-config

packages:
  - nginx
  - php7.0-fpm
 
write_files:
  - path: /tmp/default
    owner: root:root
    permissions: '0644'
    content: |
        server {
          listen 80 default_server;
          root /var/www/html;
          index index.php;
          server_name _;
          try_files $uri $uri/ /index.php;
          location ~ \.php$ {
            try_files $uri =404;
            include /etc/nginx/fastcgi.conf;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
          }
        }
     
  - path: /tmp/index.php
    owner: root:root
    permissions: '0644'
    content: |
        <html>
        <head><title>Nginx-Server</title></head>
        <body>
        <h1>This is <?php echo gethostname(); ?></h1>
        <?php foreach ($_SERVER as $name => $value) {
          if (preg_match("/^.*_(PORT|ADDR|HOST)$/",$name)) {
            echo "<p>$name: $value</p>\n";
          }
        } ?>
        </body>
        </html>

runcmd:
  - mv /tmp/index.php /var/www/html/index.php
  - mv /tmp/default /etc/nginx/sites-available/default
  - service nginx restart
  - service php7.0-fpm restart

Create the servers, and add them as members to our pool.

for NUM in `seq 1 3`; do 
    FIXED_IP="172.31.6.10${NUM}"
    openstack server create --image $IMAGE_ID --flavor m1.small \
        --availability-zone melbourne-np --security-group my-test-secgroup --key-name my-key \
        --nic net-id=$NETWORK_ID,v4-fixed-ip=$FIXED_IP --user-data user-data.txt \
        my-test-server-$NUM
    neutron lbaas-member-create --subnet my-test-subnet --address $FIXED_IP \
        --protocol-port 80 my-test-pool
done

Using fixed IPs in this way is not necessary, but it assists automation. You can use any instances you like from within the subnet; the important part is the neutron lbaas-member-create command, where you must give the correct IP to the --address argument.
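
The naming and addressing pattern the loop relies on can be seen in isolation. This stand-alone snippet just prints the name/address pairs the loop produces, without touching the cloud:

```shell
# Stand-alone illustration of the loop's naming scheme: server N is
# given the fixed address 172.31.6.10N on our example subnet.
for NUM in $(seq 1 3); do
    echo "my-test-server-$NUM -> 172.31.6.10$NUM"
done
```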

At this point, you can test whether your LBaaS is working. Use cURL to connect to the load balancer floating IP address:

$ curl http://103.6.252.17/
<html>
<head><title>Nginx-Server</title></head>
<body>
<h1>This is my-test-server-1</h1>
<p>HTTP_HOST: 103.6.252.17</p>
<p>SERVER_PORT: 80</p>
<p>SERVER_ADDR: 172.31.6.101</p>
<p>REMOTE_ADDR: 172.31.6.11</p>
</body>
</html>

With subsequent connections, you should see that the backend server serving the request changes. You can also see that the remote address seen by pool members is the load balancer's VIP address.

Create the management host

You will probably need SSH access to the backend servers in order to administer them. You can't go through the load balancer for this: even if you set up a listener for SSH, it will not let you control which server you are connecting to. Instead, we set up a management host or jumpbox: a machine requiring very few resources, whose main purpose is to offer a public-facing SSH service that can then be used to tunnel through to the private network.

We assume you already have a security group called ssh which opens up port 22 with a CIDR of 0.0.0.0/0.

$ openstack server create --image 8cdf754b-50a7-4845-b42c-863c52abea1b --flavor m1.small \
    --availability-zone melbourne-np --security-group ssh --key-name my-key \
    --nic net-id=$NETWORK_ID --wait my-test-server-mgmt
+-----------------------------+------------------------------------------------------------------------+
| Field                       | Value                                                                  |
+-----------------------------+------------------------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                                 |
| OS-EXT-AZ:availability_zone | melbourne-np                                                           |
| OS-EXT-STS:power_state      | Running                                                                |
| OS-EXT-STS:task_state       | None                                                                   |
| OS-EXT-STS:vm_state         | active                                                                 |
| OS-SRV-USG:launched_at      | 2018-03-05T01:13:11.000000                                             |
| OS-SRV-USG:terminated_at    | None                                                                   |
| accessIPv4                  |                                                                        |
| accessIPv6                  |                                                                        |
| addresses                   | my-test-network=172.31.6.100                                           |
| adminPass                   | iVzjn8JW4vbS                                                           |
| config_drive                |                                                                        |
| created                     | 2018-03-05T01:12:42Z                                                   |
| flavor                      | m1.small (0)                                                           |
| hostId                      | f9a1738800713772fdeadbc7b70d624032035c519fa4606d26637389               |
| id                          | 0bfff024-31fb-4fe6-9933-6ab3414c465b                                   |
| image                       | NeCTAR Debian 9 (Stretch) amd64 (8cdf754b-50a7-4845-b42c-863c52abea1b) |
| key_name                    | my-key                                                                 |
| name                        | my-test-server-mgmt                                                    |
| progress                    | 0                                                                      |
| project_id                  | f42f9588576c43969760d81384b83b1f                                       |
| properties                  |                                                                        |
| security_groups             | name='ssh'                                                             |
| status                      | ACTIVE                                                                 |
| updated                     | 2018-03-05T01:13:12Z                                                   |
| user_id                     | 4b5fe43d1c324775b8e8ecdf7db492ad                                       |
| volumes_attached            |                                                                        |
+-----------------------------+------------------------------------------------------------------------+

Create a public floating IP so we can connect to our jumpbox.

$ openstack floating ip create melbourne 
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-01-10T05:05:45Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 103.6.252.9                          |
| floating_network_id | e48bdd06-cc3e-46e1-b7ea-64af43c74ef8 |
| id                  | 70536738-4f5d-4704-b992-fa681da43218 |
| name                | 103.6.252.9                          |
| port_id             | 58243f0a-3864-4625-96ef-6b1773c69f8a |
| project_id          | f42f9588576c43969760d81384b83b1f     |
| revision_number     | 16                                   |
| router_id           | None                                 |
| status              | ACTIVE                               |
| updated_at          | 2018-03-05T01:29:46Z                 |
+---------------------+--------------------------------------+

Look up the port ID for our management server and assign our floating IP to it.

$ MGMT_PORT_ID=$(openstack port list --server my-test-server-mgmt -c ID -f value)
$ neutron floatingip-associate 70536738-4f5d-4704-b992-fa681da43218 $MGMT_PORT_ID
Associated floating IP 70536738-4f5d-4704-b992-fa681da43218

Access to the backend servers can then be accomplished with settings in your ssh client configuration similar to:

Host my-test-server-1
    Hostname 172.31.6.101
    User debian
    ProxyJump debian@103.6.252.9

Then connect using the ssh command:

ssh my-test-server-1

Port Forwarding

We used port 80 at both the load balancer and the pool members in the example above. LBaaS also allows you to forward requests from one port on the load balancer to a different port on the pool members. We can reconfigure our example to forward port 80 on the load balancer to port 8080 on the pool members as follows.

First, we need to create a new security group allowing port 8080 for the pool members. The load balancer will continue using the security group we created earlier for port 80.

$ openstack security group create my-test-pool-secgroup
$ openstack security group rule create --ingress --protocol tcp \
    --dst-port 8080 --remote-ip 0.0.0.0/0 my-test-pool-secgroup

Next, remove the current security group allowing port 80 from each of the pool members and add the new security group allowing port 8080 in its place. Repeat for all three members, my-test-server-1, my-test-server-2, and my-test-server-3 in our example. 

$ openstack server remove security group my-test-server-1 my-test-secgroup
$ openstack server add security group my-test-server-1 my-test-pool-secgroup
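
Repeating the swap by hand for each member is error-prone, so a short loop keeps it consistent. The DRY_RUN=echo prefix below is our own convention, not an OpenStack feature: it prints each command instead of executing it, so you can review the output before setting DRY_RUN= (empty) to run the commands for real.

```shell
# Swap the security group on all three backends. DRY_RUN=echo is a
# hypothetical safety switch: the commands are printed, not executed.
DRY_RUN=echo
for NUM in 1 2 3; do
    $DRY_RUN openstack server remove security group my-test-server-$NUM my-test-secgroup
    $DRY_RUN openstack server add security group my-test-server-$NUM my-test-pool-secgroup
done
```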

Get the IDs of the current pool members.

$ neutron lbaas-member-list my-test-pool

Delete each pool member using its ID, then re-create all three with the existing backend server IP addresses (172.31.6.101-103) and the new protocol port, 8080 in our case.

$ neutron lbaas-member-delete <member-id> my-test-pool
$ neutron lbaas-member-create --subnet my-test-subnet --address <backend-ip> \
    --protocol-port 8080 my-test-pool
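
Re-creating three members by hand is tedious, so a loop over the known addresses helps. As before, DRY_RUN=echo is our own convention for previewing the neutron commands rather than executing them; clear it to apply the changes to your pool.

```shell
# Re-create each pool member on the new port. DRY_RUN=echo previews
# the neutron commands instead of running them.
DRY_RUN=echo
for IP in 172.31.6.101 172.31.6.102 172.31.6.103; do
    $DRY_RUN neutron lbaas-member-create --subnet my-test-subnet \
        --address $IP --protocol-port 8080 my-test-pool
done
```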

The last step is updating the web server configuration on each pool member to listen on port 8080. SSH to each member, edit the NGINX configuration file, and restart the NGINX service.

$ ssh my-test-server-1

$ sudo vi /etc/nginx/sites-enabled/default
...
# listen 80 default_server;
listen 8080 default_server;
...

$ sudo service nginx restart
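
If you prefer not to edit each file by hand, the same change can be made non-interactively with sed (GNU sed's -i shown here). The snippet below demonstrates the substitution on a throwaway copy of the relevant line; on a real pool member you would point it at /etc/nginx/sites-enabled/default instead, and still restart NGINX afterwards.

```shell
# Demonstrate the listen-port change with sed on a scratch copy of the
# relevant directive; adapt the path for a real pool member.
printf 'listen 80 default_server;\n' > /tmp/default-example
sed -i 's/listen 80/listen 8080/' /tmp/default-example
cat /tmp/default-example   # -> listen 8080 default_server;
```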

You can test that all pool members are accessible using cURL, as we did earlier. You should see that the server port in the response is now 8080.

$ curl http://103.6.252.17/
...
<p>SERVER_PORT: 8080</p>
...