CVE-2024-21626 affecting Container Orchestration Engine (Magnum)

Description

In runc 1.1.11 and earlier, due to an internal file descriptor leak, an attacker could cause a newly-spawned container process (from runc exec) to have a working directory in the host filesystem namespace, allowing for a container escape by giving access to the host filesystem ("attack 2"). This vulnerability is tracked as CVE-2024-21626.

This vulnerability not only enables malicious actors to escape containerised environments but can also allow full control over the underlying host system.


Please see the Snyk blog for more details.


Services Affected

Magnum Clusters built before 2024-02-07 are affected. To check whether you are affected, run the following kubectl command:

$ kubectl get nodes -o wide

Look for the CONTAINER-RUNTIME field. If it shows a containerd version earlier than 1.6.28, the node is affected. For example:


Affected - containerd://1.6.20

Not affected - containerd://1.6.28
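
For reference, the output of an affected cluster might look roughly like this (node names are illustrative, and the other wide-output columns are trimmed for brevity):

$ kubectl get nodes -o wide
NAME                          STATUS   VERSION   CONTAINER-RUNTIME
mycluster-abcd1234-master-0   Ready    v1.27.6   containerd://1.6.20
mycluster-abcd1234-node-0     Ready    v1.27.6   containerd://1.6.20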



New Clusters

We have created new Cluster templates for the supported versions of Kubernetes. 


The following cluster templates have been updated:

- kubernetes-<AZ>-v1.27.6

- kubernetes-<AZ>-v1.26.8


Cluster templates for EOL Kubernetes versions have not been updated:

- kubernetes-<AZ>-v1.25.9

- kubernetes-<AZ>-v1.23.8


If you are able to create a new cluster from one of the updated templates and migrate your workloads to it, you will be protected.
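
If you go that route, creating a replacement cluster from an updated template looks roughly like this (a minimal sketch; the cluster name, keypair and node counts are placeholders):

$ openstack coe cluster create my-patched-cluster \
    --cluster-template kubernetes-<AZ>-v1.27.6 \
    --master-count 1 \
    --node-count 3 \
    --keypair my-keypair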


Otherwise, follow the instructions below to patch containerd and runc manually.


Mitigation for existing Clusters

Upgrade to containerd 1.6.28 and runc 1.1.12. The following steps update both components.


Accessing your Magnum cluster nodes

You will need SSH access to all of the nodes identified previously. There are a few ways to do this.


1. Use a VM as a jumpbox

First identify the Tenant Network your Kubernetes nodes are on. It should have the same name as your Cluster.
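
For example, with the OpenStack CLI (assuming the network shares your cluster's name):

$ openstack network list --name <your-cluster-name>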


Create a VM with two interfaces, one on 'Classic Networking' and one on the Tenant Network. SSH into this VM, then SSH to your Kubernetes node's IP.
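
For example, using SSH's jump-host option (the login users are assumptions: the jumpbox user depends on your VM image, and Magnum's Fedora CoreOS nodes typically use 'core'):

$ ssh -J <jumpbox-user>@<jumpbox-ip> core@<k8s-node-ip>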


2. Use a Floating IP

Using the dashboard, navigate to Projects > Network > Floating IP. Select 'Allocate IP to Project'. Choose the Pool that is in the same AZ as your Kubernetes Cluster.


For example, if you have created a cluster using the 'kubernetes-auckland-v1.27.6' Cluster Template, select the 'auckland' pool.


NOTE: This approach exposes your Kubernetes nodes directly to the Internet. It may also change your egress IP. To limit any impact, remove the Floating IP as soon as possible after use.
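
If you prefer the CLI, the equivalent allocation and cleanup look roughly like this (the pool, server name and IP are placeholders):

$ openstack floating ip create <pool-name>

$ openstack server add floating ip <k8s-node-server> <allocated-ip>

And to remove it after patching:

$ openstack server remove floating ip <k8s-node-server> <allocated-ip>

$ openstack floating ip delete <allocated-ip>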


Updating containerd and runc

1. Select one of the Kubernetes nodes (this can be either a master or a worker node), then apply the following procedure to drain and cordon the targeted node to allow for the containerd and runc updates.


$ kubectl drain <targeted-k8s-node-x> --force --ignore-daemonsets --delete-emptydir-data

2. SSH into the node. Download the cri-containerd-cni-1.6.28 bundle. You may validate the downloaded files against their associated checksum files.


$ curl -LO "https://github.com/containerd/containerd/releases/download/v1.6.28/cri-containerd-cni-1.6.28-linux-amd64.tar.gz"
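
For example, containerd's release page publishes a .sha256sum file alongside each tarball, so you can verify the download like this:

$ curl -LO "https://github.com/containerd/containerd/releases/download/v1.6.28/cri-containerd-cni-1.6.28-linux-amd64.tar.gz.sha256sum"

$ sha256sum -c cri-containerd-cni-1.6.28-linux-amd64.tar.gz.sha256sum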


3. Using systemctl, stop kubelet.service and kube-proxy.service prior to updating containerd, runc and the CNI plugins.

$ sudo systemctl stop kube-proxy.service kubelet.service

$ sudo tar xzvf cri-containerd-cni-1.6.28-linux-amd64.tar.gz -C / --no-same-owner --touch --no-same-permissions

$ sudo systemctl start kube-proxy.service

$ sudo systemctl start kubelet.service

$ systemctl list-units --type=service
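
To confirm the update took effect before bringing the node back, check the installed versions; the 1.6.28 bundle should report containerd v1.6.28 and runc 1.1.12:

$ containerd --version

$ runc --version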


4. Exit the targeted Kubernetes node and uncordon it with the following kubectl command, then check and verify that the cluster is in a healthy state.


$ kubectl uncordon <targeted-k8s-node-x>

$ kubectl get nodes

$ kubectl get pods --all-namespaces


Repeat the steps above for all Kubernetes cluster nodes. Once containerd, runc and the CNI plugins have been updated on every node, verify that your cluster is in a healthy state.
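
A quick way to confirm this across the whole cluster is to print each node's runtime version; every node should now report containerd://1.6.28 or later:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'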


Clean up

Delete the jumpbox VM, or detach and delete the Floating IP, when you are done.


NOTE

As containerd is installed via an untar command when the node boots, there are no packages that can easily be updated. This places a burden on cluster operators when such upgrades need to be done.


To ease users' burden, we are working on a ClusterAPI driver for Magnum. When this is completed, it should significantly help with upgrades. Stay tuned!
