The Nectar Research Cloud supports Load Balancer-as-a-Service (LBaaS) for projects.
Conceptually, a Load Balancer is a service that listens for requests, and then forwards those requests on to servers within a pool. There are a number of reasons why you might want to do this, but the most common are to achieve higher throughput than a single server can provide, or to build a high-availability architecture.
(Image credit: Rackspace, https://blog.rackspace.com/openstack-load-balancing-as-a-service-lbaas)
Load Balancers and Floating IPs are included in the Resource Bundles when you submit an Allocation Request or make an Allocation Amendment on the Nectar Research Cloud Dashboard. The quota of resources included in each Resource Bundle is listed in the Resource Bundle Table. Selecting the Custom Resource Bundle will allow you to specify the Load Balancer and Floating IP quota for your project.
A prerequisite for working with LBaaS is an understanding of the Nectar Advanced Networking service. For more information, you might also wish to see the OpenStack LBaaS documentation.
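If you have already set up Advanced Networking, you can check which networks and subnets are available in your project before continuing. For example:
$ openstack network list
$ openstack subnet list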
Using LBaaS on the Nectar Research Cloud
Dashboard
You can create and manage your load balancers via the Nectar Dashboard under the Network section. We have a full tutorial for the dashboard here.
Command Line
There are five main steps to setting up LBaaS for your Nectar project:
- Create the load balancer.
- Attach a floating IP to the load balancer.
- Create a listener.
- Create a server pool.
- Create the backend servers and add them to the pool.
- (Optional but recommended) Create a jumpbox or management host for accessing the backend servers.
For more information, please read our full Tutorial on Load Balancing.
Create the load balancer
We'll start by creating a new load balancer and attaching it to our subnet (note that the --vip-subnet-id argument takes a subnet ID, but a name will also work).
$ openstack loadbalancer create --name my-test-lb --availability-zone melbourne --vip-subnet-id my-private-subnet
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone | melbourne |
| created_at | 2020-05-14T04:13:54 |
| description | |
| flavor_id | None |
| id | edfebfcc-bc2c-49f9-9390-32a3ee2de8e3 |
| listeners | |
| name | my-test-lb |
| operating_status | OFFLINE |
| pools | |
| project_id | 0bdf024c921848c4b74d9e69af9edf08 |
| provider | amphora |
| provisioning_status | PENDING_CREATE |
| updated_at | None |
| vip_address | 192.168.2.28 |
| vip_network_id | 01d16b27-19ab-4865-af13-c2db83888ad8 |
| vip_port_id | 6d3a653f-6a26-4d96-9b0d-6bcc249543c4 |
| vip_qos_policy_id | None |
| vip_subnet_id | 6b72925f-fa00-4fe1-b7f0-a157cfd15208 |
+---------------------+--------------------------------------+
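Note that the load balancer is created asynchronously and starts in the PENDING_CREATE provisioning status. Before configuring it further, you may want to check that it has become ACTIVE, for example by repeating:
$ openstack loadbalancer show my-test-lb -c provisioning_status -f value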
We also want a floating IP for our load balancer:
$ openstack floating ip create melbourne
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2018-03-05T01:53:13Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 103.6.252.17 |
| floating_network_id | e48bdd06-cc3e-46e1-b7ea-64af43c74ef8 |
| id | 1af4e01b-54fe-4d8f-a3d3-40fda6d65290 |
| name | 103.6.252.17 |
| port_id | None |
| project_id | f42f9588576c43969760d81384b83b1f |
| revision_number | 1 |
| router_id | None |
| status | ACTIVE |
| updated_at | 2018-03-05T01:53:13Z |
+---------------------+--------------------------------------+
Attach the floating IP to the load balancer's Virtual IP (VIP) port (the port is the vip_port_id from the load balancer output above):
$ openstack floating ip set --port 6d3a653f-6a26-4d96-9b0d-6bcc249543c4 103.6.252.17
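To confirm the floating IP is now associated with the load balancer's VIP port, you can inspect it with:
$ openstack floating ip show 103.6.252.17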
Create the listener
Now we'll create a listener for our load balancer. For our example, we're going to create just one listener on TCP port 80 for HTTP.
$ openstack loadbalancer listener create --name my-test-listener --protocol HTTP --protocol-port 80 my-test-lb
+-----------------------------+--------------------------------------+
| Field | Value |
+-----------------------------+--------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| created_at | 2020-05-14T04:21:31 |
| default_pool_id | None |
| default_tls_container_ref | None |
| description | |
| id | 0a5e5255-5976-456c-9202-8bb07ab2b506 |
| insert_headers | None |
| l7policies | |
| loadbalancers | edfebfcc-bc2c-49f9-9390-32a3ee2de8e3 |
| name | my-test-listener |
| operating_status | OFFLINE |
| project_id | 0bdf024c921848c4b74d9e69af9edf08 |
| protocol | HTTP |
| protocol_port | 80 |
| provisioning_status | PENDING_CREATE |
| sni_container_refs | [] |
| timeout_client_data | 50000 |
| timeout_member_connect | 5000 |
| timeout_member_data | 50000 |
| timeout_tcp_inspect | 0 |
| updated_at | None |
| client_ca_tls_container_ref | None |
| client_authentication | NONE |
| client_crl_container_ref | None |
| allowed_cidrs | None |
| tls_ciphers | |
+-----------------------------+--------------------------------------+
Create the pool
Create a pool for our load balancer. The pool handles connections to our backend servers. Once our servers have been built, we'll add them to this pool.
$ openstack loadbalancer pool create --name my-test-pool --protocol HTTP --listener my-test-listener --lb-algorithm ROUND_ROBIN
+----------------------+--------------------------------------+
| Field | Value |
+----------------------+--------------------------------------+
| admin_state_up | True |
| created_at | 2020-05-14T04:25:12 |
| description | |
| healthmonitor_id | |
| id | c00b7d96-410d-445c-b550-470c1fd87e7b |
| lb_algorithm | ROUND_ROBIN |
| listeners | 0a5e5255-5976-456c-9202-8bb07ab2b506 |
| loadbalancers | edfebfcc-bc2c-49f9-9390-32a3ee2de8e3 |
| members | |
| name | my-test-pool |
| operating_status | OFFLINE |
| project_id | 0bdf024c921848c4b74d9e69af9edf08 |
| protocol | HTTP |
| provisioning_status | PENDING_CREATE |
| session_persistence | None |
| updated_at | None |
| tls_container_ref | None |
| ca_tls_container_ref | None |
| crl_container_ref | None |
| tls_enabled | False |
| tls_ciphers | |
+----------------------+--------------------------------------+
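Optionally, you can also add a health monitor to the pool so that unresponsive backend servers are automatically taken out of rotation. A minimal sketch (the name and timing values here are only illustrative):
$ openstack loadbalancer healthmonitor create --name my-test-monitor \
    --delay 5 --timeout 5 --max-retries 3 --type HTTP --url-path / my-test-pool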
Create the backend servers
First, we'll create a new security group that opens up TCP port 80; we'll use this when booting our instances.
$ openstack security group create my-test-secgroup
$ openstack security group rule create --protocol tcp --dst-port 80 my-test-secgroup
Find the ID of the image we want to use for our servers. We filter on the owner property so that only Nectar's officially supported images are shown. We're going to choose Debian 9 for this example.
$ IMAGE_ID=$(openstack image list --property owner=28eadf5ad64b42a4929b2fb7df99275c \
--name 'NeCTAR Debian 9 (Stretch) amd64' -c ID -f value)
Look up our network ID, because we'll need the ID for attaching servers to the network.
$ NETWORK_ID=$(openstack network show -c id -f value my-test-network)
Now we use that information to create the backend servers. For this demonstration, we create a simple cloud-init configuration which will install the NGINX web server and a single PHP page for it to serve.
The index.php file will be served at the root of our web server and print the hostname and request addresses. Later, we'll be able to validate which backend server is serving our request to the load balancer, and the address information it receives in the HTTP request header.
Save this config as user-data.txt; we'll pass this file to the server create step:
#cloud-config
packages:
  - nginx
  - php7.0-fpm
write_files:
  - path: /tmp/default
    owner: root:root
    permissions: '0644'
    content: |
      server {
          listen 80 default_server;
          root /var/www/html;
          index index.php;
          server_name _;
          try_files $uri $uri/ /index.php;
          location ~ \.php$ {
              try_files $uri =404;
              include /etc/nginx/fastcgi.conf;
              fastcgi_pass unix:/run/php/php7.0-fpm.sock;
          }
      }
  - path: /tmp/index.php
    owner: root:root
    permissions: '0644'
    content: |
      <html>
      <head><title>Nginx-Server</title></head>
      <body>
      <h1>This is <?php echo gethostname(); ?></h1>
      <?php foreach ($_SERVER as $name => $value) {
        if (preg_match("/^.*_(PORT|ADDR|HOST)$/",$name)) {
          echo "<p>$name: $value</p>\n";
        }
      } ?>
      </body>
      </html>
runcmd:
  - mv /tmp/index.php /var/www/html/index.php
  - mv /tmp/default /etc/nginx/sites-available/default
  - service nginx restart
  - service php7.0-fpm restart
Create the servers, and add them as members to our pool.
for NUM in $(seq 1 3); do
    FIXED_IP="172.31.6.10${NUM}"
    openstack server create --image $IMAGE_ID --flavor m1.small \
        --availability-zone melbourne-qh2 --security-group my-test-secgroup --key-name my-key \
        --nic net-id=$NETWORK_ID,v4-fixed-ip=$FIXED_IP --user-data user-data.txt \
        my-test-server-$NUM
    openstack loadbalancer member create --address $FIXED_IP --protocol-port 80 my-test-pool
done
Using the fixed IPs in this way is not necessary but assists automation. You can use any instances you like from within the subnet; the important part is the openstack loadbalancer member create command, where you must give the correct IP to the --address argument.
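You can confirm that the members have been added to the pool (and check their status) with:
$ openstack loadbalancer member list my-test-pool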
At this point, you can test whether your LBaaS is working. Use cURL to connect to the load balancer floating IP address:
$ curl http://103.6.252.17/
<html>
<head><title>Nginx-Server</title></head>
<body>
<h1>This is my-test-server-1</h1>
<p>HTTP_HOST: 103.6.252.17</p>
<p>SERVER_PORT: 80</p>
<p>SERVER_ADDR: 172.31.6.101</p>
<p>REMOTE_ADDR: 172.31.6.11</p>
</body>
</html>
With subsequent connections, you should see that the backend server serving the request changes. You can also see that the remote address seen by pool members is the load balancer's VIP address.
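A quick way to see the round robin behaviour is to request the page several times in a row and watch which hostname is returned each time (assuming the same floating IP as above):
$ for i in $(seq 1 6); do curl -s http://103.6.252.17/ | grep '<h1>'; done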
Create the management host
You will probably need SSH access to the backend servers in order to administer them. You can't go through the load balancer for this: even if you set up a listener for SSH, it won't let you control which server you are connecting to. Instead, we set up a management host or jumpbox. This is a machine requiring very few resources, with little purpose other than to offer a public-facing SSH service that can then be used to tunnel through to the private network.
We assume you already have a security group called ssh which opens up port 22 with a CIDR of 0.0.0.0/0.
$ openstack server create --image 8cdf754b-50a7-4845-b42c-863c52abea1b --flavor m1.small \
--availability-zone melbourne-np --security-group ssh --key-name my-key \
--nic net-id=$NETWORK_ID --wait my-test-server-mgmt
+-----------------------------+------------------------------------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | melbourne-np |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2018-03-05T01:13:11.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | my-test-network=172.31.6.100 |
| adminPass | iVzjn8JW4vbS |
| config_drive | |
| created | 2018-03-05T01:12:42Z |
| flavor | m1.small (0) |
| hostId | f9a1738800713772fdeadbc7b70d624032035c519fa4606d26637389 |
| id | 0bfff024-31fb-4fe6-9933-6ab3414c465b |
| image | NeCTAR Debian 9 (Stretch) amd64 (8cdf754b-50a7-4845-b42c-863c52abea1b) |
| key_name | my-key |
| name | my-test-server-mgmt |
| progress | 0 |
| project_id | f42f9588576c43969760d81384b83b1f |
| properties | |
| security_groups | name='ssh' |
| status | ACTIVE |
| updated | 2018-03-05T01:13:12Z |
| user_id | 4b5fe43d1c324775b8e8ecdf7db492ad |
| volumes_attached | |
+-----------------------------+------------------------------------------------------------------------+
Create a public floating IP so we can connect to our jumpbox.
$ openstack floating ip create melbourne
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2018-01-10T05:05:45Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 103.6.252.9 |
| floating_network_id | e48bdd06-cc3e-46e1-b7ea-64af43c74ef8 |
| id | 70536738-4f5d-4704-b992-fa681da43218 |
| name | 103.6.252.9 |
| port_id | 58243f0a-3864-4625-96ef-6b1773c69f8a |
| project_id | f42f9588576c43969760d81384b83b1f |
| revision_number | 16 |
| router_id | None |
| status | ACTIVE |
| updated_at | 2018-03-05T01:29:46Z |
+---------------------+--------------------------------------+
Look up the port ID for our management server and assign our floating IP to it.
$ MGMT_PORT_ID=$(openstack port list --server my-test-server-mgmt -c ID -f value)
$ openstack floating ip set --port $MGMT_PORT_ID 103.6.252.9
Access to the backend servers can then be achieved with settings in your SSH client configuration (e.g. ~/.ssh/config) similar to:
Host my-test-server-1
    Hostname 172.31.6.101
    User debian
    ProxyJump debian@103.6.252.9
Then connect using the ssh command:
ssh my-test-server-1
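If you prefer not to add a configuration entry, recent versions of OpenSSH can achieve the same thing in a single command with the -J (jump host) option:
$ ssh -J debian@103.6.252.9 debian@172.31.6.101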
Port Forwarding
We used port 80 at both the load balancer and the pool members in the example above. LBaaS also allows you to forward requests from one port on the load balancer to another port on the pool members. We can reconfigure our example to forward port 80 on the load balancer to port 8080 on the pool members as follows.
First, we need to create a new security group allowing port 8080 for the pool members. The load balancer will continue using the security group we created earlier for port 80.
$ openstack security group create my-test-pool-secgroup
$ openstack security group rule create --protocol tcp --dst-port 8080 my-test-pool-secgroup
Next, remove the current security group allowing port 80 from each of the pool members and add the new security group allowing port 8080 in its place. Repeat for all three members, my-test-server-1, my-test-server-2, and my-test-server-3 in our example.
$ openstack server remove security group my-test-server-1 my-test-secgroup
$ openstack server add security group my-test-server-1 my-test-pool-secgroup
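Rather than repeating these two commands by hand, you could swap the security groups on all three members in one short loop, for example:
for NUM in 1 2 3; do
    openstack server remove security group my-test-server-$NUM my-test-secgroup
    openstack server add security group my-test-server-$NUM my-test-pool-secgroup
done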
Get the IDs of each of the current pool members.
$ openstack loadbalancer member list my-test-pool
Delete each pool member using its ID, then re-create all three with the existing backend server IP addresses (172.31.6.101-103) and the new protocol port, 8080 in our case.
$ openstack loadbalancer member delete <member-id> my-test-pool
$ openstack loadbalancer member create --address <backend-ip> --protocol-port 8080 my-test-pool
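For example, to re-create all three members in one go (assuming the same fixed IPs as above):
for NUM in 1 2 3; do
    openstack loadbalancer member create --address 172.31.6.10${NUM} \
        --protocol-port 8080 my-test-pool
done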
The last step is updating the web server configuration on each pool member to listen on port 8080. SSH to each member, edit the NGINX configuration file, and restart the NGINX service.
$ ssh my-test-server-1
$ sudo vi /etc/nginx/sites-enabled/default
...
# listen 80 default_server;
listen 8080 default_server;
...
$ sudo service nginx restart
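If you prefer to avoid editing each file by hand, the same change could be scripted over SSH, assuming you have client configuration entries (like the my-test-server-1 example above) for all three members:
for NUM in 1 2 3; do
    ssh my-test-server-$NUM \
        "sudo sed -i 's/listen 80 /listen 8080 /' /etc/nginx/sites-enabled/default && sudo service nginx restart"
done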
You can test that all pool members are accessible using cURL, as we did earlier. You should see that the server port in the response is now 8080.
$ curl http://103.6.252.17/
...
<p>SERVER_PORT: 8080</p>
...