VCF Object Character Limits

As a Senior Staff Solution Architect in the VMware Cloud Foundation (VCF) Division at Broadcom, I am often asked about the character limits for different types of VCF objects. In my previous blog post, I discussed the character limits for vSphere Inventory Objects. Here, I want to provide a quick look at some VCF objects and their respective character limits as of the VCF 5.1 release.

Note: For some of these maximums, I have observed with VCF 5.1 that the SDDC Manager UI/API would actually let me rename an object with a name of up to 21 characters. I am not sure whether this is a documentation or product gap, but I have already filed an internal bug to get this clarified.

Here are some of the VCF objects and their character limits:

| VCF Object | Maximum length | SDDC Manager UI limit |
| --- | --- | --- |
| SDDC | 255 characters | 64 characters |
| VM | 255 characters | 31 characters |
| Network | 255 characters | 31 characters |
| Subnet | 255 characters | 31 characters |
| Security Group | 255 characters | 31 characters |
| Rule | 255 characters | 31 characters |
| Tag Name | 255 characters | 31 characters |
| Tag Value | 255 characters | 31 characters |
| Orchestration Template | 255 characters | 31 characters |
| Resource Pool | 255 characters | 31 characters |

It’s important to note that these character limits are subject to change based on future updates and releases of VCF. Additionally, some object types may have different character limits depending on the context in which they are used. For example, when creating a network using the SDDC Manager UI, the limit is 31 characters, but when creating a network as part of a blueprint, the limit is 64 characters.
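When automating object creation against the SDDC Manager API, it can help to validate names before submitting a request. Here is a minimal sketch in shell, using the 31-character UI limit discussed above (the limit value and sample names are illustrative; adjust per object type):

```shell
#!/bin/sh
# Reject names longer than the SDDC Manager UI limit before calling the API.
# The 31-character limit is taken from the limits above; adjust per object type.
UI_LIMIT=31

check_name() {
  name="$1"
  if [ "${#name}" -gt "$UI_LIMIT" ]; then
    echo "REJECT: '${name}' is ${#name} characters (limit ${UI_LIMIT})"
    return 1
  fi
  echo "OK: '${name}' is ${#name} characters"
}

check_name "prod-overlay-segment-01"
```

A check like this fails fast in a provisioning pipeline instead of surfacing a validation error mid-deployment.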

In conclusion, understanding the character limits for VCF objects is essential for effective use of the platform. While these limits may change in future updates and releases, this information should provide a good starting point for architects and administrators looking to design and implement their VCF environments.

Disable IPv6 in ESXi Kickstart Without Requiring Additional Reboot

Disabling IPv6 in ESXi Kickstart: A Simple Solution

As a Senior Staff Solution Architect in the VMware Cloud Foundation (VCF) Division at Broadcom, I often receive questions from colleagues and customers regarding the configuration of ESXi hosts. Recently, one of my colleagues asked if there was a way to disable IPv6 during ESXi Kickstart without requiring an additional reboot. This got me thinking about the best approach to solving this issue.

By default, ESXi supports dual stack networking (IPv4 and IPv6), and users can also configure just IPv4 or IPv6. However, disabling IPv6 during Kickstart can be a bit tricky, as the setting is typically added in the %post or %firstboot section, which will require an additional reboot due to changing the networking stack default.

The solution was actually quite simple: leveraging the %pre section. The %pre section ensures that IPv6 is disabled upon the initial reboot after the ESXi installation. To disable IPv6, we can use localcli and update the tcpip4 module parameter as shown in the example below:
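Here is a minimal sketch of what the %pre section could look like; localcli is used rather than esxcli since hostd is not yet running during this phase of the installation:

```shell
# ESXi Kickstart snippet: disable IPv6 before the post-install reboot
%pre --interpreter=busybox

# Set the tcpip4 module parameter so IPv6 is off when the host first boots
localcli system module parameters set -m tcpip4 -p ipv6=0
```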

Once the ESXi host reboots after the installation, we can confirm that IPv6 is not enabled using either “esxcli system module parameters list -m tcpip4” or “esxcli network ip interface ipv6 get”.

Note that if you wish to configure a pure IPv6 setup using ESXi Kickstart, please see this blog post for more details.

Patryk, one of my colleagues, asked if there was any specific reason to disable IPv6 apart from “I’m not using it”. There are no known bugs or configuration issues that require disabling IPv6, and there is nothing wrong with leaving it enabled (you’ll just get a link-local address). However, some organizations have policies of disabling anything that is not in use. If you are never going to use IPv6, disabling it can also simplify output when you have multiple VMkernel interfaces, since otherwise you will see both IPv4 and IPv6 addresses.

That’s it! Disabling IPv6 during ESXi Kickstart is a straightforward process that can be accomplished using the %pre section. As always, I appreciate your feedback and questions in the comments below.

Unlocking the Full Potential of VMware Private AI Foundation with NVIDIA (PAIF-N)

VMware Cloud Foundation with NVIDIA: Streamlining AI/ML Workloads with PowerCLI Scripts

In recent news, VMware has released VMware Cloud Foundation (VCF) 5.1.1, which includes the new VMware Private AI Foundation with NVIDIA (PAIF-N) solution. This solution enables customers to run modern artificial intelligence (AI) and machine learning (ML) workloads on VCF, providing a platform optimized and validated with NVIDIA. As a Senior Staff Solution Architect in the VMware Cloud Foundation Division at Broadcom, I am excited to share more about this solution and how PowerCLI scripts can help streamline the deployment process.

PAIF-N Solution Overview

The PAIF-N solution provides a comprehensive platform for running AI/ML workloads on VCF. It includes a complete software bill of materials (BOM) and a step-by-step implementation guide for deploying the solution. The PAIF-N solution is designed to provide optimal performance, scalability, and security for AI/ML workloads, ensuring that customers can run their applications with confidence.

PowerCLI Scripts for PAIF-N Deployment

To make setting up the PAIF-N solution even easier, a set of PowerCLI scripts has been developed that automates the deployment process end to end. The scripts cover the necessary steps from planning through implementation, including creating a new PAIF-N environment, configuring the NVIDIA GPUs, and monitoring the solution’s performance.

Benefits of PowerCLI Scripts for PAIF-N Deployment

The PowerCLI scripts for PAIF-N deployment offer several benefits, including:

1. Simplified Deployment – The scripts automate the deployment process, making it easier and faster for customers to set up the PAIF-N solution.

2. Consistency – The scripts ensure consistency in the deployment process, reducing the risk of errors and improving the overall quality of the solution.

3. Flexibility – The scripts provide a high degree of flexibility, allowing customers to customize the deployment process to meet their specific needs.

4. Cost Savings – By automating the deployment process, the scripts can help reduce labor costs and improve resource utilization.

Conclusion

The new PAIF-N solution for VCF provides a powerful platform for running AI/ML workloads on VMware Cloud Foundation. The PowerCLI scripts for PAIF-N deployment offer a simplified, consistent, flexible, and cost-effective way to deploy the solution. As a Senior Staff Solution Architect in the VMware Cloud Foundation Division at Broadcom, I am excited to see how this innovative solution will help our customers succeed in their AI/ML journey.

JSON Deployment in VCF 5.1.1 Requires clusterImageEnabled Property


While updating and testing my Automated VMware Cloud Foundation (VCF) Lab Deployment Script to support the latest VCF 5.1.1 release, I came across a strange error in the Cloud Builder UI: “Failed to upload personality to SDDC Manager”.

I was confused because my script had been working perfectly fine with previous versions of VCF. After some troubleshooting, I discovered that the issue was caused by a change in how VCF 5.1.1 handles deployments that use the JSON deployment method. Specifically, the “clusterImageEnabled” property is now required to be explicitly defined under the “clusterSpec” section of the JSON file.

The “clusterImageEnabled” property determines whether the VCF Management Domain will be deployed using vSphere Lifecycle Manager (VLCM) image-based deployment or the legacy vSphere Update Manager (VUM) baseline deployment. The default value for this property is “true”, which means that VCF will use VLCM image-based deployment by default. However, if you want to use the legacy VUM baseline deployment, you need to set this property to “false”.

To fix the issue, I simply added the “clusterImageEnabled” property with the desired value of “true” to my JSON file, and the deployment completed successfully. Here’s an example of how the “clusterSpec” section of the JSON file should look with the “clusterImageEnabled” property defined:

```json
{
  "clusterSpec": {
    "clusterImageEnabled": true,
    "clusterName": "my-vcf-cluster",
    "datastore": "ds-123456",
    "network": "vn-123456",
    "subnet": "sn-123456"
  }
}
```
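Since a missing property only surfaces as a bring-up failure, a cheap pre-flight check in your automation can catch it earlier. A minimal sketch in shell; this is a simple textual check, and the spec file name is illustrative:

```shell
#!/bin/sh
# Pre-flight check before submitting a deployment spec to the Cloud Builder API:
# fail fast if clusterImageEnabled does not appear anywhere in the JSON file
check_spec() {
  if grep -q '"clusterImageEnabled"' "$1"; then
    echo "clusterImageEnabled is set"
  else
    echo "ERROR: clusterImageEnabled missing under clusterSpec" >&2
    return 1
  fi
}

# Demo against a minimal spec fragment
cat > /tmp/vcf-spec.json <<'EOF'
{ "clusterSpec": { "clusterImageEnabled": true, "clusterName": "my-vcf-cluster" } }
EOF
check_spec /tmp/vcf-spec.json
```

A grep-based check does not validate where the property sits in the document; a JSON-aware tool would be stricter, but this is enough to catch the omission that causes the personality upload failure.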

I hope this helps anyone who is automating their VCF deployments using the Cloud Builder API. This change in VCF 5.1.1 may cause some issues if you’re not aware of it, but with this knowledge, you should be able to avoid any potential roadblocks.

In summary, when deploying a VCF environment using the JSON deployment method, make sure to explicitly define the “clusterImageEnabled” property under the “clusterSpec” section of your JSON file. This will ensure a successful deployment and avoid the error I encountered.

That’s it for this blog post. If you have any questions or feedback, please leave a comment below. Thanks for reading!

Unlocking Evaluation Mode for VMware Cloud Foundation (VCF) 5.1.1

VMware Cloud Foundation (VCF) 5.1.1 has been released with several new features and capabilities, one of which is the “License Later” feature, also known as evaluation mode. This feature allows users to deploy VCF without requiring component license keys upfront, making it easier for users to test and evaluate the product.

To use the License Later feature, users can select “No” when prompted to enter a license key during deployment. This will allow the deployment to proceed without any licenses, and all components will be in evaluation mode. The evaluation mode is valid for 60 days, after which users must apply a license key to continue using the product.

It’s important to note that when deploying VCF using the Cloud Builder API, users must add the “deployWithoutLicenseKeys” property with a value of “true” to the deployment JSON file. This allows the deployment to proceed without any license keys.
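As an illustrative sketch, the property might appear at the top level of the deployment JSON alongside the other bring-up fields (omitted here); verify the exact placement against the Cloud Builder API documentation for your release:

```json
{
  "deployWithoutLicenseKeys": true
}
```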

In addition, there is a new entry in the workbook called “License Now” which allows users to select “No” and leave all license fields blank. This will also enable the License Later feature.

I have already updated my VCF Automated Lab Deployment script to support the new evaluation mode with VCF 5.1.1, as I have received requests from customers asking about this capability.

In response to a question from shhwang, the License Later feature is valid for 60 days, and users can apply licenses within SDDC Manager using individual component licenses or the new single solution license key.

To answer Manu’s question, if users continue the deployment without a license, they can finish the deployment and all components will be in evaluation mode. To apply licenses, users can do so within SDDC Manager using individual component licenses or the new single solution license key.

Finally, Jason Kirk asked about how to get the VCF 5.1.1 bits for his lab. Unfortunately, the only way to obtain VCF 5.1.1 is through the VMware Partner Network program or by purchasing an annual VMUG Advantage subscription. I recommend reaching out to VMware or an authorized partner to inquire about the availability of VCF 5.1.1 for your lab.

In conclusion, the License Later feature in VCF 5.1.1 makes it easier for users to test and evaluate the product without the need for component license keys upfront. This feature is valid for 60 days, after which users must apply a license key to continue using the product. To apply licenses, users can do so within SDDC Manager using individual component licenses or the new single solution license key.

Optimize Your vSphere Security with Dynamic ESXi Firewall Rules for Non-Standard Syslog Ports in vSphere 8.0 Update 2b and 7.0 Update 3p

Using Non-Standard Syslog Ports in ESXi: A Game Changer

As a seasoned IT professional, you may be familiar with the default syslog ports used by ESXi hosts for audit, compliance, and troubleshooting purposes. However, if you need to use a non-standard syslog port, the current solution has been less than ideal. But fear not, as vSphere 8.0 Update 2b and vSphere 7.0 Update 3p have brought a welcome enhancement to the table.

In the past, configuring a non-standard syslog port required either creating a custom VIB or modifying the local.sh startup script, both of which are time-consuming and fragile to maintain. With the latest releases of vSphere, however, the ESXi firewall now creates a dynamic ruleset automatically when a non-standard syslog port is configured.

Here’s an example of how to configure a custom syslog port of 12345 on your ESXi host. Configure the syslog server with the non-standard port, then reload the syslog service:

```shell
esxcli system syslog config set --loghost="udp://192.168.1.100:12345"
esxcli system syslog reload
```

Once configured, the ESXi firewall automatically creates a dynamic ruleset that opens up the specified port for outbound connectivity, which you can confirm by running esxcli network firewall ruleset list. This feature is especially useful if you need to use a non-standard syslog port for any reason.

The best part? The dynamic ruleset will persist even after a reboot of the host, so you don’t have to worry about reconfiguring the firewall every time the host restarts.


In the comments section, CLaudio asks if the rule will be permanent even after a reboot of the host, and I confirm that it will indeed be persistent. Arun also comments, asking for help with creating the dynamic rule on an ESXi 7.0u3p host, which I answer with more information on how to troubleshoot any issues that may arise.

Overall, this new feature in vSphere is a game changer for anyone using non-standard syslog ports on their ESXi hosts. No longer do you have to worry about the hassle of customizing the firewall ruleset or relying on a custom VIB. With the dynamic ESXi ruleset, you can easily configure your syslog server with any port you choose, and the firewall will take care of the rest.

Effortlessly Manage Your vSphere Environment with this Custom ESXi ‘Dummy’ Reboot VIB for vLifecycle Manager

Creating a Custom ESXi VIB for vSphere Lifecycle Manager (vLCM) Remediation

As a Technical Adoption Manager (TAM), I recently received a request from one of our customers to create a custom ESXi VIB that could be used with vSphere Lifecycle Manager (vLCM) and would only require the ESXi host to reboot as part of the remediation. This might sound like a strange request, but there are good reasons for this approach. In this blog post, I will outline the steps to create such a custom VIB and how it can be used with vLCM.

Background

———-

vSphere Lifecycle Manager (vLCM) is the successor to vSphere Update Manager (VUM), and it provides a more comprehensive set of features for managing updates and remediation across vSphere environments. One of the key benefits of vLCM is that it allows for offline bundles, which can be used to create custom VIBs that can be imported into vLCM for remediation.

Custom VIB Requirements

———————–

To create a custom ESXi VIB for vLCM remediation, we need to follow certain requirements:

1. Since the custom VIB is unsigned, the ESXi software acceptance level must be set to Community Supported.

2. The VIB descriptor.xml file must set the live-install-allowed and live-remove-allowed options to false, so that installing or removing the VIB requires the host to reboot.

3. The VIB must be compatible with both vSphere 7.x and 8.x.
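To illustrate requirement 2, here is a hypothetical excerpt of a descriptor.xml for such a dummy VIB; the element names follow the standard VIB descriptor format, but every value shown is a placeholder:

```xml
<vib version="5.0">
  <type>bootbank</type>
  <name>esxi-dummy-reboot</name>
  <version>1.0.0-0.0.1</version>
  <vendor>example</vendor>
  <summary>Dummy VIB that only requires a host reboot</summary>
  <!-- Unsigned VIB: host acceptance level must be Community Supported -->
  <acceptance-level>community</acceptance-level>
  <!-- false forces a reboot on install and on removal -->
  <live-install-allowed>false</live-install-allowed>
  <live-remove-allowed>false</live-remove-allowed>
</vib>
```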

Creating a Custom ESXi VIB

—————————-

To create a custom ESXi VIB, we can follow these steps:

Step 1 – Download the pre-built offline bundle from the Github repo or build your own using the instructions provided in my previous blog post.

Step 2 – Ensure the ESXi software acceptance level is configured with Community Support since the custom VIB would not be signed. You can do so by following the instructions provided here using either the vSphere UI or ESXCLI.

Step 3 – Use the vSphere UI to import the offline bundle by navigating to Lifecycle Manager->Actions and then clicking on the Import Updates operation.

Step 4 – Create or edit a vSphere Cluster that is managed by a vLCM Image by navigating to Update->Image->Edit and then clicking on the Add Components operation to select the ESXi reboot component and then click save.

Step 5 – Lastly, apply the remediation to the vSphere Cluster; a reboot will be required after the ESXi component has been installed on the host.


Benefits of Custom VIBs for vLCM Remediation

———————————————

Using custom VIBs for vLCM remediation offers several benefits, including:

1. Flexibility – Custom VIBs can be created to address specific issues or requirements that are not covered by the standard vSphere updates.

2. Efficiency – Because the dummy VIB carries no actual payload, remediation simply triggers a coordinated host reboot through the standard vLCM workflow, saving time and manual effort.

3. Automation – Custom VIBs can be automated using vLCM, allowing for more efficient and consistent remediation across multiple hosts.

Conclusion

———-

In this blog post, we have explored the process of creating a custom ESXi VIB for vSphere Lifecycle Manager (vLCM) remediation. By following these steps, you can create a custom VIB that can be used with vLCM to perform remediation with minimal downtime and increased efficiency. As vSphere environments continue to evolve, the ability to create custom VIBs for vLCM remediation will become increasingly important.

Streamline Your VMware Cloud Foundation Host Commissioning with ESXi Kickstart

Automating VCF Host Commissioning with ESXi Kickstart

As a Senior Staff Solution Architect in the VMware Cloud Foundation (VCF) Division at Broadcom, I have had the pleasure of working with a variety of customers to help them automate their ESXi provisioning and management processes. One of the most frequent use cases that I encounter is the need to automate the host commissioning process for VMware Cloud Foundation (VCF) environments. In this blog post, I will show you how to incorporate the VCF host commissioning workflow automatically as part of an ESXi Kickstart installation.

Background and Challenges

Traditionally, after an ESXi host has been provisioned, it needs to be manually added to VMware SDDC Manager before it can be consumed for either expanding or deploying a new workload domain. This multi-step process can be time-consuming and prone to errors, especially when dealing with a large number of hosts. Moreover, this process requires user interaction, which can be a challenge for automated provisioning scenarios.

Solution Overview

To address these challenges, I came up with an idea to incorporate the VCF host commissioning workflow automatically as part of an ESXi Kickstart installation. The approach involves hosting a simple host mapping text file on a web server that maps each ESXi FQDN to its SDDC Manager details, including the service account token and network pool ID. The ESXi host then uses this file to remotely invoke the VCF Commission REST API and commission itself to the appropriate SDDC Manager.

Implementation Details

Here are the implementation details of the solution:

1. Host Mapping Text File:

Host a simple host mapping text file (sddcm-mapping.txt) on a web server that maps each ESXi FQDN to its SDDC Manager details, including the service account token and network pool ID. The format of the file is as follows:

<ESXi FQDN>:<SDDC Manager IP>:<Service Account Token>:<Network Pool ID>

For example:

my-esxi-host.local:10.10.10.10:my-service-account:my-network-pool

2. ESXi Kickstart Configuration:

Modify the %firstboot section of your ESXi Kickstart to include logic along the following lines. This is a simplified sketch: the payload fields should be verified against the VCF API reference for your release, the web server address is a placeholder, and the host root password is shown as a placeholder variable:

ENTRY=$(wget -q -O - http://<webserver>/sddcm-mapping.txt | grep "^$(hostname -f):")

SDDCM=$(echo "$ENTRY" | cut -d: -f2)

TOKEN=$(echo "$ENTRY" | cut -d: -f3)

POOL=$(echo "$ENTRY" | cut -d: -f4)

curl -sk -X POST "https://${SDDCM}/v1/hosts" -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" -d "[{\"fqdn\":\"$(hostname -f)\",\"username\":\"root\",\"password\":\"${ESXI_ROOT_PASSWORD}\",\"storageType\":\"VSAN\",\"networkPoolId\":\"${POOL}\"}]"

This code downloads the sddcm-mapping.txt file, extracts the SDDC Manager address, service account token, and network pool ID for this host, and then uses these details to invoke the VCF host commission REST API (POST /v1/hosts) to commission the host to the appropriate SDDC Manager.

3. VCF Host Commissioning Workflow:

Once the ESXi host has finished provisioning and has rebooted, it will attempt to run the firstboot script. The firstboot script will download the sddcm-mapping.txt file and check whether there is a configuration entry that directs it to commission itself to a specific SDDC Manager. If an entry is found, the script will use the credentials and construct the required payload and invoke the VCF host commission REST API. If everything was set up correctly, you should now see a task within your SDDC Manager which has been initiated by the ESXi host after it has finished provisioning, completely automating the host commissioning process.
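As a quick sanity check, the firstboot logic can also validate that a mapping entry has the expected number of colon-separated fields before attempting the API call. A minimal sketch, assuming the colon-delimited format shown in step 1 (and that no field itself contains a colon):

```shell
#!/bin/sh
# Verify that an sddcm-mapping.txt entry has exactly four colon-separated fields
validate_entry() {
  fields=$(echo "$1" | awk -F: '{print NF}')
  if [ "$fields" -eq 4 ]; then
    echo "valid"
  else
    echo "invalid: expected 4 fields, got $fields"
  fi
}

validate_entry "my-esxi-host.local:10.10.10.10:my-service-account:my-network-pool"
```

Skipping the commission attempt on a malformed entry keeps a typo in the mapping file from producing a confusing REST API failure on the host.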

Benefits and Future Enhancements

The benefits of this solution are numerous:

* Zero-touch provisioning: The entire host commissioning workflow is automated, eliminating the need for user interaction.

* Scalability: This solution can be easily scaled to accommodate large numbers of ESXi hosts.

* Flexibility: The solution allows for dynamic assignment of SDDC Managers, making it easier to manage and maintain your VCF environment.

In the future, I would love to see this feature integrated into ESXi Kickstart as a native capability, further simplifying the host commissioning process. Additionally, the solution could be enhanced to support multiple SDDC Managers and more advanced workflows, such as automated network pool allocation and VM deployment.

Conclusion

Incorporating the VCF host commissioning workflow automatically into an ESXi Kickstart installation provides a simple yet powerful solution for automating the host provisioning process in your VCF environment. By eliminating user interaction and leveraging the power of automation, you can significantly improve the efficiency and scalability of your VCF deployment.

Microsoft OS/2 2.0 Now Available on ESXi

This is a discussion thread on Reddit about the operating system OS/2, which was popular in the 1990s but is now considered a “dead” OS. The thread was started by a user who is running OS/2 as a virtual machine (VM) on vSphere and is looking for fragments of the software. Other users share their experiences with OS/2, including its speed and stability compared to Windows, and discuss the possibility of running macOS VMs on vSphere. One user mentions that IBM licensed OS/2 to Arca Noae and that it is still maintained and updated with periodic feature updates. Another user shares a link to a website with old OS/2 software, including VirtualBox 5.1. The thread also touches on the topic of “software archeology” and the challenges of working with ancient software.

Troubleshooting Tips

Troubleshooting vCLS VMs Failing to Power On in Nested ESXi Environments

As a Solutions Architect in the VMware Cloud Foundation (VCF) Division at Broadcom, I recently encountered an issue when deploying a new VCF Workload Domain using the VCF Holodeck Toolkit, which leverages Nested ESXi. Specifically, I noticed that the vSphere Cluster Services (vCLS) VMs kept failing to power on with the following error message: “No host is compatible with the virtual machine.” This was quite strange, especially since the vCLS VMs ran fine when the VCF Management Domain was set up.

After investigating the issue, I found that the vCLS VMs expected the MWAIT CPU instruction to be exposed, as indicated by the following message in the vmware.log file: “Feature ‘cpuid.mwait’ was 0, but must be 0x1.” This is because newer vSphere releases configure Per-VM EVC on the vCLS VMs, which requires MWAIT, a CPU feature that may not be exposed within a Nested ESXi environment.

To resolve this issue, I worked with Ben Sier, who works on Holodeck, and he provided a workaround. Here are the steps to resolve the issue:

Step 1 – Upgrade VM Compatibility (vHW)

We first need to upgrade the VM Compatibility (vHW) of the vCLS VMs to at least version 14, the minimum required for Per-VM EVC. To do this, right-click on the vCLS VM and select Upgrade VM Compatibility.

Step 2 – Enable Per-VM EVC Configuration

Log back into your vCenter Server and then enable the Per-VM EVC configuration, but set it to Disabled. Within a few seconds, you should notice that the vCLS VM can now be successfully powered on with the Per-VM EVC configuration applied.

If you wanted to automate these steps, here are quick PowerCLI snippets that can be used. This is a sketch: the vCLS VM name is a placeholder, and the exact Set-VM parameters may vary across PowerCLI releases:

```powershell
# Step 1 - Upgrade the VM Compatibility (vHW) of the vCLS VM to at least version 14
$vm = Get-VM -Name "vCLS-1"
Set-VM -VM $vm -HardwareVersion vmx-14 -Confirm:$false

# Step 2 - Configure Per-VM EVC as disabled by applying an empty EVC mask
$vm.ExtensionData.ApplyEvcModeVM_Task($null, $true)
```

Conclusion

In this blog post, we discussed an issue that may arise when deploying vCLS VMs in a Nested ESXi environment, where the VMs fail to power on with the error message “No host is compatible with the virtual machine.” We also provided a workaround: upgrading the VM Compatibility (vHW) of the vCLS VMs and enabling the Per-VM EVC configuration. These steps can be automated using PowerCLI.

I hope this blog post helps you troubleshoot any issues you may encounter when deploying vCLS VMs in Nested ESXi environments. If you have any further questions or concerns, please feel free to reach out to me.