Streamlining Application Delivery with TCE and NSX Advanced Load Balancer on VMware Cloud on AWS

================================================================================

In my previous blog post, I showed you how to automate the deployment of Tanzu Community Edition (TCE) clusters on VMware Cloud on AWS. However, that process did not include any load balancer integration. In this post, I will go through the manual steps required to deploy a load-balanced TCE cluster on VMware Cloud on AWS.

Prerequisites

-------------

Before we begin, there are a few prerequisites you need to be aware of:

* You must have the VMware Cloud on AWS service deployed in your environment.

* You must have the NSX Advanced Load Balancer (AVI) controller installed and configured in your environment.

* You must have the Tanzu Community Edition (TCE) clusters deployed in your environment.

Creating Additional Network Segments

------------------------------------

To deploy a load-balanced TCE cluster, we need to create three additional network segments: “aviMGMT”, “aviVIP”, and “workloads”. The aviMGMT network will be used by the AVI controller and the Service Engines. The aviVIP network will be used by the AVI controller to allocate the load-balanced virtual IP addresses that will ultimately point to the workloads. The workloads network will host the TCE workload cluster nodes themselves.

The reference network configuration can be found in my previous blog post, but here’s a quick summary:

* “aviMGMT” network: 10.22.5.0/24

* “aviVIP” network: 10.22.6.0/24

* “workloads” network: 10.22.7.0/24
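If you prefer to script these segments rather than create them in the VMC console, a sketch with the Terraform NSX-T provider might look like the following. On VMware Cloud on AWS, segments are created as fixed segments attached to the compute gateway; the gateway addresses and DHCP ranges below are assumptions, so adjust them to your own addressing plan.

```hcl
# Hedged sketch: the three segments via the nsxt provider's
# fixed-segment resource (the variant supported on VMC).
resource "nsxt_policy_fixed_segment" "avi_mgmt" {
  display_name = "aviMGMT"
  subnet {
    cidr        = "10.22.5.1/24"              # gateway address for 10.22.5.0/24
    dhcp_ranges = ["10.22.5.100-10.22.5.200"] # leave room for static IPs
  }
}

resource "nsxt_policy_fixed_segment" "avi_vip" {
  display_name = "aviVIP"
  subnet {
    cidr = "10.22.6.1/24"                     # no DHCP; VIPs come from a static pool
  }
}

resource "nsxt_policy_fixed_segment" "workloads" {
  display_name = "workloads"
  subnet {
    cidr        = "10.22.7.1/24"
    dhcp_ranges = ["10.22.7.100-10.22.7.200"]
  }
}
```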

Enabling Three-Way Communication

--------------------------------

To enable three-way communication between the TCE cluster network, aviMGMT, and aviVIP through HTTP/HTTPS, we need to add firewall rules to allow traffic between these networks. Here are the firewall rules you need to add:

* Allow HTTP/HTTPS traffic from aviMGMT to workloads

* Allow HTTP/HTTPS traffic from aviVIP to workloads

* Allow HTTP/HTTPS traffic from workloads to aviMGMT and aviVIP
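On VMware Cloud on AWS, these rules live on the compute gateway, which Terraform can only modify through the predefined-policy resource. The rules above can be sketched as follows; the group names, the policy path, and the `cgw-all` scope label are assumptions based on the standard VMC layout, so verify them against your SDDC.

```hcl
# Groups for two of the networks, created in VMC's "cgw" domain;
# an aviVIP group follows the same pattern with 10.22.6.0/24.
resource "nsxt_policy_group" "avi_mgmt" {
  domain       = "cgw"
  display_name = "aviMGMT"
  criteria {
    ipaddress_expression {
      ip_addresses = ["10.22.5.0/24"]
    }
  }
}

resource "nsxt_policy_group" "workloads" {
  domain       = "cgw"
  display_name = "workloads"
  criteria {
    ipaddress_expression {
      ip_addresses = ["10.22.7.0/24"]
    }
  }
}

# One of the three rules; the others follow the same pattern with the
# source and destination groups swapped or replaced.
resource "nsxt_policy_predefined_gateway_policy" "cgw" {
  path = "/infra/domains/cgw/gateway-policies/default"

  rule {
    display_name       = "aviMGMT-to-workloads-http"
    source_groups      = [nsxt_policy_group.avi_mgmt.path]
    destination_groups = [nsxt_policy_group.workloads.path]
    services           = ["/infra/services/HTTP", "/infra/services/HTTPS"]
    action             = "ALLOW"
    scope              = ["/infra/labels/cgw-all"]
  }
}
```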

Configuring the AVI Controller

------------------------------

The AVI controller is a centralized brain for all load balancing operations. It has visibility across the environments and (in an on-premises environment) automates the deployment and management of the load balancing endpoints, which are known as Service Engines. Unfortunately, in VMware Cloud on AWS, the AVI Controller does not automate the configuration and management of Service Engines due to a lack of permissions on the cloudadmin@vmc.local user. However, we can still use the AVI controller to configure our load balancer.

To deploy the AVI controller, download the AVI controller OVA file from VMware Customer Connect. When deploying the AVI controller into vSphere, you will need to provide some parameters, such as its static IP address, default gateway, and subnet mask; other parameters will be filled in automatically by the NSX Manager. Select a static IP address from your “aviMGMT” network outside of the DHCP range and assign it to the AVI controller.
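The OVA deployment itself can also be scripted. The sketch below uses the Terraform vSphere provider's OVF deployment support; the data-source lookups are assumed to be defined elsewhere, and the vApp property keys are assumptions taken from a typical AVI controller OVA, so inspect the OVA you downloaded to confirm the exact names.

```hcl
# Hedged sketch: deploy the controller OVA with the vsphere provider.
# The datacenter/datastore/host/pool/network data sources are assumed
# to exist elsewhere in the configuration.
resource "vsphere_virtual_machine" "avi_controller" {
  name             = "avi-controller"
  datacenter_id    = data.vsphere_datacenter.dc.id
  datastore_id     = data.vsphere_datastore.ds.id
  host_system_id   = data.vsphere_host.host.id
  resource_pool_id = data.vsphere_resource_pool.pool.id

  ovf_deploy {
    local_ovf_path  = "controller.ova"
    ovf_network_map = { "Management" = data.vsphere_network.avi_mgmt.id }
  }

  vapp {
    properties = {
      # Property keys are assumptions -- check the OVA's vApp options.
      "avi.mgmt-ip.CONTROLLER"    = "10.22.5.10" # static IP outside the DHCP range
      "avi.mgmt-mask.CONTROLLER"  = "255.255.255.0"
      "avi.default-gw.CONTROLLER" = "10.22.5.1"
    }
  }
}
```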

Defining the VIP Network

------------------------

When defining the VIP network, we also assign it a static IP pool range, which will be used by the AVI controller to assign virtual IPs to the services we want to load balance. Here’s an example:

```hcl
avi_vip {
  network_id = "avi-vip"
  cidr       = "10.22.6.0/24"
}
```

Defining Infrastructure with Terraform

--------------------------------------

To define our infrastructure, we use the `avi_cloud` resource. Here’s an example:

```hcl
resource "avi_cloud" "default-cloud" {
  provider = "aws"

  avi_vip {
    network_id = "avi-vip"
    cidr       = "10.22.6.0/24"
  }
}
```


Alternatively, you can use Terraform to extract the certificate as a data object:

```hcl
data "avi_certificate" "admin" {
  path = "/opt/vmware/ssl/certs/avi-cert.pem"
}
```

Creating a Secret for Avi Administrator Log-in Credentials

----------------------------------------------------------

When deploying the clusters, we might sometimes see that the `load-balancer-and-ingress-service` package fails to reconcile. After digging deeper into this issue, I found that this happens because the `ako-0` pod is looking for a secret named “avi-secret”, which contains the AVI administrator log-in credentials and which did not get created. To resolve this issue, create a secret with the following content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: avi-secret
  namespace: avi-system   # the namespace the ako-0 pod runs in
type: Opaque
data:
  username: YWRtaW4=      # base64-encoded "admin"
  password: <base64-encoded AVI admin password>
```

That’s it! With these manual steps, you should now have a load-balanced TCE cluster deployed on VMware Cloud on AWS.