How to clean up NSX Advanced Load Balancer following replacement of a failed Tanzu control plane node

How do I clean up a missing control plane node in the Avi load balancer console?

This post outlines an approach I used to solve a problem which has occurred in several environments I’ve worked in recently. I haven’t seen a similar set of instructions anywhere yet, but that doesn’t mean this is the only way to solve the problem. Check with VMware Support if you’re dealing with a production issue, and don’t follow this guidance without properly understanding the type of problem you’re experiencing.


If you have found this page because you’re stuck with a similar problem, it is probably because one or more of the control plane nodes in a Tanzu Kubernetes Grid (TKG) cluster have failed and been replaced automatically, leaving a broken IP pool entry in the NSX Advanced Load Balancer user interface.

For example, you log in and find that one of the IP pools which defines the control plane endpoints is degraded (shown as 3/4 servers up).

Clicking into the cluster will provide further detail of the missing control plane endpoints

In this case, one of the existing control plane nodes (172.20.11.45) became frozen and went offline, eventually losing its DHCP lease before it could be converted into a permanent reservation. Tanzu’s vSphere integration automatically provisioned a new node, and the old IP address now belongs to a new VM somewhere outside of Tanzu.

However, despite this situation occurring some days previously, the Avi Kubernetes Operator (AKO) has not cleaned up the stale entry, perhaps expecting that the VM might eventually be recovered.

If you’re in a similar situation you will know the name of the environment, and should still be able to determine the IP addresses of your current control plane nodes:

kubectl config use-context [name of your management cluster context]
kubectl get nodes -o wide

In this case we are only interested in the IP addresses belonging to nodes with the control-plane role (the first three in the output below).
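For illustration, the output looks something like this, trimmed to the relevant columns. The node names and addresses here are invented (I’ve reused the tkg-mgmt-projit cluster name from the endpoint edited later in this post); substitute your own.

NAME                                  STATUS   ROLES                  INTERNAL-IP
tkg-mgmt-projit-control-plane-abc12   Ready    control-plane,master   172.20.11.46
tkg-mgmt-projit-control-plane-def34   Ready    control-plane,master   172.20.11.47
tkg-mgmt-projit-control-plane-ghi56   Ready    control-plane,master   172.20.11.48
tkg-mgmt-projit-md-0-7d9f8b-jkl78     Ready    <none>                 172.20.11.49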

There aren’t any ‘missing’ control plane nodes shown above, so Kubernetes appears satisfied that it is in a workable state.

As a validation, check that the endpoints listed within the Kubernetes service map onto the current working list of nodes.

List the endpoints for the Kubernetes service (in the default namespace)

kubectl get ep kubernetes -o json
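If you don’t have the output to hand, here is a trimmed sketch of what it typically looks like (most of the metadata removed, and the addresses are illustrative):

{
  "apiVersion": "v1",
  "kind": "Endpoints",
  "metadata": {
    "name": "kubernetes",
    "namespace": "default"
  },
  "subsets": [
    {
      "addresses": [
        { "ip": "172.20.11.46" },
        { "ip": "172.20.11.47" },
        { "ip": "172.20.11.48" }
      ],
      "ports": [
        { "name": "https", "port": 6443, "protocol": "TCP" }
      ]
    }
  ]
}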

The JSON output above is quite simple to read vertically, and confirms that there are three IP addresses within a subset of endpoints serving the Kubernetes API service on port 6443, reached via the Avi load balancer vserver that is defined in your ~/.kube/config file.

These match the output which the NSX Advanced Load Balancer showed previously.


What puzzled me for a very long time now seems obvious: you cannot edit or remove defunct entries from the Avi IP pool using the UI, because the operator synchronises the list of endpoints for each service. By fixing the condition in Kubernetes, the operator will take care of the content of the pool itself.

This is the way.

Obtain the list of services in the tkg-system namespace

kubectl get svc -n tkg-system

Now use the cluster-specific control plane service name to output the list of endpoints for the control plane, as shown below
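A minimal sketch of the command, reusing the same bracketed endpoint name that appears in the edit command further down (substitute the control plane service name from your own cluster):

kubectl get ep [tkg-system-tkg-mgmt-projit-control-plane] -n tkg-system -o yaml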

Aha, there’s the 172.20.11.45 control plane node which no longer exists in the cluster.

Edit the endpoint and manually remove the missing address from the subset addresses section

kubectl edit ep [tkg-system-tkg-mgmt-projit-control-plane] -n tkg-system

Using the vi editor, remove the two lines declaring the ip and nodeName entries for the missing cluster node
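For orientation, the subsets section will look something like the sketch below (the node names are illustrative); the two lines for the stale 172.20.11.45 entry are the ones to delete:

subsets:
- addresses:
  - ip: 172.20.11.45                                # remove this line...
    nodeName: tkg-mgmt-projit-control-plane-old99   # ...and this one
  - ip: 172.20.11.46
    nodeName: tkg-mgmt-projit-control-plane-abc12
  - ip: 172.20.11.47
    nodeName: tkg-mgmt-projit-control-plane-def34
  - ip: 172.20.11.48
    nodeName: tkg-mgmt-projit-control-plane-ghi56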

Save the changes and close the file, and the endpoint will be updated.

Refresh the Avi load balancer UI and, if all is well, the pool will be updated dynamically once the AKO operator detects the updated list of endpoints.


Further confirmation of the status update can be found in the ako-0 pod logs, which show that a change has been detected between the cached copy of the virtual server object and the updated relationship computed from the graph database.

kubectl logs ako-0 -n avi-system

It then resynchronises the pool content with Avi.

I’d be very pleased to hear if you run into a similar scenario, as I do not think this element of AKO’s behaviour is described anywhere in the official documentation for either Tanzu or AKO. The DHCP lease re-issue will often crop up if an admin did not take care of making a permanent reservation after a node was added. Often this is because Tanzu will detect a broken node and intervene without anyone being aware of the problem, which can catch you out if addresses are not reserved permanently by default in your subnet.

Thanks for reading –

How much space does an air gap installation of Tanzu TKG 2.1.1 need?

In a follow-up post to how-much-space-does-an-air-gap-installation-of-tanzu-tkg-1-6-0-need, I thought it would be useful to expand on the initial summary to include an upgrade to TKG 2.1.1.

In the previous 1.6.0 example there were a total of 157 images (881 artifacts) requiring 9.7GB of storage space. However, the download process has been modified and no longer uses a shell script to download files for an air gap registry, but rather a command such as:

tanzu isolated-cluster download-bundle --source-repo projects.registry.vmware.com/tkg --tkg-version v2.1.1

This results in 244 tar files being downloaded for a single version of TKG, requiring 45GB of space.

When uploading these tar files I experienced several problems caused by a Redis bug when using Harbor 1.10.x, and the upload command only succeeded once I had upgraded to Harbor 2.5.0.

tanzu isolated-cluster upload-bundle --source-directory ./ --destination-repo registry.sbcpureconsult.internal/tkg --ca-certificate /tmp/ca.crt

For the combination of both the TKG 1.6.0 and 2.1.1 releases there are a total of 177 repositories requiring 20.58GB of storage space.

Subtracting one figure from the other (20.58GB - 9.7GB) indicates that TKG 2.1.1 accounts for 10.88GB on its own.

How much space does an air gap installation of Tanzu TKG 1.6.0 need?

I have implemented several air-gapped installations of Tanzu Kubernetes Grid 1.6 using a Harbor registry, so I thought it would be worth recording how many images are stored and the space required.

Example clean registry with only TKG 1.6 files

Short on time? I should caveat that my results only record the space needed for a single version of Kubernetes (1.23.8). This is the newest supported build of Kubernetes in the TKG 1.6.0 release.

During the air gap installation it is possible to reduce the set of files that must be stored in your registry by extracting the Bill of Materials for a specific version only:

export DOWNLOAD_TKRS="v1.23.8_vmware.2-tkg.1"

In total (for this specific release) there are 157 images (881 artifacts) requiring 9.7GB of storage space.

I have tested the deployment of a management and workload cluster from the air gap registry and can confirm a successful installation.

Over time you may accumulate older versions in your registry which are no longer required; however, there is currently no information available on how to reduce the number of images stored. I would therefore recommend keeping the image-copy file produced during each iteration of the air gap registry preparation phase so that you can remove the obsolete images manually at a later date.