Container Orchestration Service


The Nectar Container Orchestration Engine (COE) service provisions Kubernetes clusters as a service on the Nectar Research Cloud using OpenStack Magnum, allowing users to spin up their own Kubernetes clusters on demand.
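As a minimal sketch of getting started (assuming the OpenStack CLI with the python-magnumclient plugin is installed and your project credentials are sourced), you can browse the available cluster templates with:

# List the Kubernetes cluster templates available to your project
openstack coe cluster template list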




Known Issues

Clusters stop working when a user has left the project

When a Kubernetes cluster is created, the creator's credential is linked to and embedded into the cluster. This is so that Kubernetes can use the user's credential to create OpenStack resources like LoadBalancers and Volumes on behalf of the user. In more technical detail, a user (trustee) is created, and a trust is created linking the creator (trustor) to the trustee.
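You can see this relationship from the CLI; a hedged sketch (the exact output depends on your client version and permissions):

# List Keystone trusts visible to you; the Magnum-created trust links the
# cluster creator (trustor) to the generated trustee user
openstack trust list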


When a user leaves the project, or leaves their organisation, their credential becomes invalid. This results in the Kubernetes cluster being unable to manage OpenStack resources, e.g. unable to create new LoadBalancers or Persistent Volumes.


To avoid this issue, we recommend using a Robot Account for long-running clusters.
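A sketch of the idea, assuming the robot account's credentials live in a file named robot-openrc.sh (the filename is hypothetical):

# Source the robot account's credentials first, so the cluster's trust is
# tied to the robot account rather than to an individual user
source robot-openrc.sh
openstack coe cluster create my-cluster --cluster-template kubernetes-melbourne-qh2-v1.28.8 --node-count 2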


Updates

11-06-2024 OVN is now the default load balancer provider type

Starting from today, 'ovn' is the default load balancer provider type, replacing the previous default of 'amphora'.


Features

  • Load balancer provisioning is much more stable and consistent, as no amphoras need to be created.
  • Reduces cross availability zone traffic. Previously, a load balancer could be created in a different AZ from your cluster.

Limitations

  • It only supports Layer 4 load balancing, so Layer 7 features such as SSL termination are not possible (a possible workaround is sketched after this list).
  • It currently only supports the SOURCE_IP_PORT algorithm.
  • For more information, refer to the Octavia OVN provider documentation.
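If you do need Layer 7 features, one possible workaround (a sketch, not a tested recipe) is to request the amphora provider explicitly via the octavia_provider label at cluster creation time. Note that, depending on your client version, --labels may replace rather than merge the template's labels:

# Hypothetical: request the amphora Octavia provider instead of ovn
openstack coe cluster create my-l7-cluster --cluster-template kubernetes-melbourne-qh2-v1.28.8 --labels octavia_provider=amphora --node-count 2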


Cluster Templates

Currently, the cluster templates that default to the OVN Octavia provider type are:

  • kubernetes-melbourne-qh2-v1.28.8
  • kubernetes-melbourne-qh2-uom-v1.28.8
  • kubernetes-monash-01-v1.28.8
  • kubernetes-monash-02-v1.28.8
  • kubernetes-intersect-v1.28.8
  • kubernetes-tasmania-v1.28.8
  • kubernetes-auckland-v1.28.8
  • kubernetes-QRIScloud-v1.28.8
  • kubernetes-ardc-mel-1-v1.28.8
  • kubernetes-swinburne-01-v1.28.8


Going forward, you can identify these cluster templates by the following label:

'octavia_provider': 'ovn'
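For example, a quick way to confirm from the CLI (output formatting may vary by client version):

# Show a template's labels to confirm its Octavia provider
openstack coe cluster template show kubernetes-melbourne-qh2-v1.28.8 -c labels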


26-02-2024 EOL Kubernetes v1.23

We are marking the Kubernetes v1.23 Cluster Templates as end-of-life (EOL). These templates will now be hidden and/or deleted.


Kubernetes v1.23 reached EOL upstream on 28 Feb 2023, and we are unable to provide support for it any longer.


Please update to one of the Kubernetes versions still in active support.
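A sketch of upgrading an existing cluster to a newer, still-supported template with the CLI (the template name below is illustrative; list the current templates first):

# Rolling upgrade of an existing cluster to a newer cluster template
openstack coe cluster upgrade my-cluster kubernetes-melbourne-qh2-v1.28.8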


20-02-2024 CVE-2024-21626

There is a vulnerability affecting the containerd and runc versions used in Magnum clusters. Please see the CVE-2024-21626 support page for more information.
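To see which runtime version each of your nodes is running (a quick check; column names may vary slightly between kubectl versions):

# The CONTAINER-RUNTIME column reports the runtime and version per node
kubectl get nodes -o wide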


06-01-2024 Kubernetes v1.27 Conformance

The Cluster Templates for Kubernetes v1.27 are now certified.


23-09-2023 CNCF Certified

We have achieved CNCF Conformance for the Magnum service on ARDC Research Cloud!


The conformance test enables Nectar to ensure our Magnum service is up to standard, and allows for interoperability between Kubernetes services provided by different vendors.


This allows users to be sure that Kubernetes clusters created by Magnum can work with the wide variety of software available on the Internet. It also assures users that their platforms, services and applications built on Nectar Cloud can work in other Clouds.


18-08-2022 Magnum Yoga

Magnum has been upgraded to Yoga. This fixes several stability issues in Magnum. A list of changes is below:

  • Supports Kubernetes v1.23.8 with Fedora CoreOS 35
  • Defaults to containerd instead of docker. In preparation for the removal of dockershim in Kubernetes v1.24, the default kubernetes-*-1.23.8 templates now run containerd by default. See this blog post by Kubernetes for more information. We have decided to default to containerd so users can be better prepared for v1.24.
  • If you are planning to move to Kubernetes v1.23 from the v1.17.11 or v1.21.1 templates, please note that several deprecated APIs were removed in v1.22. You should check your manifests for usage of deprecated APIs before moving to v1.23 (see the check after this list).
  • Updated versions of several Kubernetes / OpenStack plugins in the v1.23.8 templates.
  • For the updated plugins, please follow the respective links for changelogs. These updates fix bugs and improve the stability between Kubernetes and OpenStack. For the CSI plugins (attacher/provisioner/etc), the respective versions can be seen in the Cluster Template labels.
  • As upstream is dropping support for Apache Mesos and Docker Swarm, we will also be limiting our support to only Kubernetes. We strongly encourage users to use Kubernetes; if you require assistance please feel free to reach out with a ticket, and we will be glad to support you on your Kubernetes journey.
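One quick, hedged way to check for removed APIs is to compare what your manifests reference against what the new cluster actually serves:

# List the API versions served by the cluster; manifests referencing versions
# absent here (e.g. networking.k8s.io/v1beta1 Ingress, removed in v1.22)
# must be migrated before upgrading
kubectl api-versions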


Old issues

Flannel not working

(this has been fixed in new cluster templates)


With the kubernetes-*-v1.21 cluster templates, flannel has a bug where it will not work properly on first boot, or on host reboot. You will need to kill the flannel pods if this happens.

# Delete the flannel pods; the DaemonSet recreates them and networking recovers
kubectl -n kube-system delete pod -l app=flannel

This is caused by a bug in systemd on Fedora CoreOS 32. The fix is to use the kubernetes-*-1.23 cluster templates, which use Fedora CoreOS 35.
