Unlocking LUN ID Export with Path Selection Policy

As a PowerCLI enthusiast, I’m always on the lookout for new and useful scripts to help me manage my vSphere environment. Today, I want to share with you a powerful script that exports a list of LUNs attached to ESXi hosts in a cluster along with the details of Path Selection Policy selected for the LUN and CommandsToSwitchPath parameter set for the LUNs.

The script is designed to work on vSphere 6.5, 6.7, and 7.0, and it has been tested on these versions to ensure its accuracy and reliability. To use the script, simply replace the vCenter_Server_IP_Address/FQDN, Cluster_Name, and Path of CSV File placeholders with your own values, and then run the script to generate a report of LUNs mapped to all ESXi hosts in your environment.

Here’s the script:

```powershell
# Define variables
$vCenter_Server_IP_Address = "your-vcenter-server-ip-address"
$Cluster_Name = "your-cluster-name"
$Path = "your-path-to-csv-file"

# Connect to vCenter
Connect-VIServer -Server $vCenter_Server_IP_Address -Credential (Get-Credential)

# Get the LUNs attached to every ESXi host in the cluster
$Luns = Get-Cluster -Name $Cluster_Name | Get-VMHost | Get-ScsiLun -LunType disk

# Build one record per LUN
$Csv = foreach ($Lun in $Luns) {
    [PSCustomObject]@{
        "Host"                 = $Lun.VMHost.Name
        "CanonicalName"        = $Lun.CanonicalName
        "PathSelectionPolicy"  = $Lun.MultipathPolicy
        "CommandsToSwitchPath" = $Lun.CommandsToSwitchPath
    }
}

# Export LUN details to CSV file
$Csv | Export-Csv -Path $Path -NoTypeInformation
```

In this script, we first define the variables that we’ll use to connect to vCenter and specify the path of the CSV file we want to export. We then connect to vCenter using the Connect-VIServer cmdlet, passing in the server IP address or FQDN and our credentials.

Next, we retrieve all ESXi hosts in the specified cluster and pull the list of LUNs presented to each host. We then loop through the LUNs and build a custom object for each one, capturing the LUN identifier, the path selection policy in use, and the CommandsToSwitchPath value.

Finally, we use the Export-Csv cmdlet to export the custom object array to a CSV file at the specified path. The -NoTypeInformation parameter is used to exclude the type information from the CSV file.

With this script, you can easily generate a report of LUNs mapped to all ESXi hosts in your vSphere cluster, along with the details of Path Selection Policy selected for each LUN and CommandsToSwitchPath parameter set for each LUN. This can be useful for auditing and troubleshooting purposes, or for simply keeping track of your LUN usage and configuration.

So there you have it – a powerful PowerCLI script to export LUN details from vSphere clusters. I hope you find this script helpful in managing your vSphere environment. Happy scripting!

Unlocking Network and Security Virtualization with VMware NSX

VMware NSX: Architecture Components and Distributed Routing

In this series of blogs, we will delve into the architectural components of VMware NSX, a software-defined network virtualization and security solution offered by VMware. In our previous blog, we discussed the different types of nodes that make up a typical production NSX deployment, including NSX Manager appliances and transport nodes. In this blog, we will focus on the management plane, control plane, data plane, and distributed routing in VMware NSX.

Management Plane

The management plane is responsible for storing the desired network configuration inside a database that is replicated across three NSX Manager appliances, which run as virtual machines. The management plane also acts as the user interface and entry point for programmatic users. It is bundled in a virtual machine called the NSX Manager Appliance, which is clustered into three appliances for production deployments to ensure high availability.

Control Plane

The control plane resides inside an NSX Controller element which, in the latest releases of NSX, is hosted within the NSX Manager appliances. In earlier releases of NSX, the NSX Controllers ran as separate virtual machines. The control plane is responsible for pushing the configuration entered by the user through the UI or APIs down to the data plane.

Data Plane

The data plane is responsible for performing stateless packet forwarding, and user data passes through the data plane. The data plane comprises transport nodes that can be ESXi hosts, edge VMs, or bare metal servers. In the latest releases of NSX, support for KVM hosts as transport nodes has been withdrawn.

Transport Nodes

A transport node is a node that has been prepared for NSX: it runs the local control-plane daemon and the forwarding engines that implement the NSX data plane. A transport node can be an edge VM, an ESXi host, or a bare metal server. Edge transport nodes are service appliances dedicated to running centralized network services that cannot be distributed to the hypervisors, such as north/south routing, load balancing, DHCP, VPN, and NAT. They can be instantiated as a bare metal appliance or in virtual machine form factor.

Distributed Routing

In the next blog, we will discuss distributed routing in VMware NSX. Distributed routing is a critical component of NSX that enables network services to be distributed across multiple transport nodes, providing scalability and high availability. We will delve into how NSX uses a combination of centralized and distributed routing techniques to optimize network performance and security.

Conclusion

In conclusion, VMware NSX is a powerful software-defined network virtualization and security solution that provides a complete set of networking services like routing, switching, firewalling, load balancing, and QoS. Understanding the architectural components of NSX, such as the management plane, control plane, data plane, and transport nodes, is essential for deploying and managing NSX in production environments. In our upcoming blogs, we will explore each of these components in more detail and discuss how they work together to provide a highly scalable and secure network infrastructure for virtual machines and cloud-native applications.


Unlocking Event Subscriptions in vRealize Automation 8

In vRealize Automation 8, the process of creating an Event Subscription has changed slightly. There are now 40 predefined Event Topics available under the Extensibility Library in Cloud Assembly, which you can choose from when creating an Event Subscription. These event topics include blueprint configuration, Kubernetes cluster allocation, compute allocation, and more. To create an Event Subscription, select the desired Event Topic, choose the ABX Action or Workflow to trigger, specify whether the subscription should block the event, and define the subscription scope. Additionally, you can review the schema of the Event Topic, which is the set of properties passed to Orchestrator when an event of that topic is triggered.

When creating an Event Subscription, it's important to understand the schema of the Event Topic. To review it, click the "Schema" tab in the Event Subscription window. If you are not sure about the schema of an Event Topic, you can create a blank workflow with an input variable named "inputProperties" and use the schema to fill in its properties.

Another important aspect of creating an Event Subscription is specifying conditions. By specifying conditions, you can filter out specific events from the stream of events triggered when users request services through Service Broker. Conditions can only be specified in JavaScript syntax in the current version of vRealize Automation. For example, if you want to trigger a workflow only for a specific machine component, you can specify a condition such as `event.data.blueprintId == 'e9d2abc4-94fa-48f1-a1db-19a31510a375' && event.data.componentId == 'Secondary_VM'`. This condition ensures that the workflow is triggered only if the requested blueprint has an id of e9d2abc4-94fa-48f1-a1db-19a31510a375 and only for the component with id Secondary_VM.
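vRA evaluates these conditions in JavaScript against the event payload. The Python sketch below simply mirrors the semantics of the example condition above against a hypothetical payload; it can be a handy way to reason about what a condition will and will not match before wiring it into a subscription:

```python
# Mirror of the example subscription condition. The real evaluation happens
# inside vRA in JavaScript; this sketch only reproduces the logic against a
# hypothetical event payload (a dict of the topic's schema properties).
def matches(event: dict) -> bool:
    data = event.get("data", {})
    return (
        data.get("blueprintId") == "e9d2abc4-94fa-48f1-a1db-19a31510a375"
        and data.get("componentId") == "Secondary_VM"
    )

# A payload for the matching blueprint and component triggers the workflow
event = {"data": {"blueprintId": "e9d2abc4-94fa-48f1-a1db-19a31510a375",
                  "componentId": "Secondary_VM"}}
print(matches(event))  # prints True

# Any other component on the same blueprint is filtered out
other = {"data": {"blueprintId": "e9d2abc4-94fa-48f1-a1db-19a31510a375",
                  "componentId": "Primary_VM"}}
print(matches(other))  # prints False
```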

In summary, creating an Event Subscription in vRealize Automation 8 involves selecting an Event Topic, choosing an ABX Action or Workflow to trigger, specifying any blocking of events, defining the subscription scope, and specifying conditions if desired. It’s important to understand the schema of the Event Topic and specify conditions to filter out specific events. With these steps, you can successfully create an Event Subscription in vRealize Automation 8.

Unlocking VMware Aria Operations


VMware Aria Operations (vROps) is a powerful tool for managing and monitoring your virtual infrastructure, but like any complex system, it’s not immune to issues. That’s why Content Management was introduced in vROps version 8.2, which allows you to backup and export your configuration, including dashboards, views, report templates, and more. In this blog post, we will cover two methods for taking a backup and export of your VMware Aria Operations configuration: the Content Management tab under Administration in the vROps UI, and a Python script that uses the native APIs of Aria Operations.

Method 1: Content Management Tab under Administration in VMware Aria Operations UI

To take a backup and export of your vROps configuration using the Content Management tab, follow these steps:

1. Log in to your vROps instance and navigate to the Administration tab.

2. Click on the “Content Management” tab.

3. Select the content you want to backup or export (such as dashboards, views, report templates, etc.).

4. Click the “Backup” button to create a backup of the selected content.

5. Choose the backup location and file name, then click “Save”.

6. Repeat the process for each type of content you want to backup or export.

Method 2: Python Script using native APIs of Aria Operations

To take a backup and export of your vROps configuration using a Python script, follow these steps:

1. Install the Aria Operations SDK and required Python libraries.

2. Develop a Python script that uses the native APIs of Aria Operations to backup and export the desired content.

3. Test the script on a development environment before running it in production.

4. Schedule the script as a scheduled task to take periodic backups of your configuration.
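The API-based approach in steps 1-4 can be sketched roughly as follows, using only the Python standard library. The token-acquisition endpoint and the `vRealizeOpsToken` authorization scheme follow the vROps Suite API; the content-export route used here is a placeholder assumption, so check your version's API documentation for the exact content-management path:

```python
# Sketch of Method 2: authenticate to the vROps Suite API and fetch content.
# The export route below is an illustrative placeholder, not a documented path.
import json
import urllib.request

BASE_URL = "https://vrops.example.com"  # hypothetical vROps FQDN


def build_token_request(base_url: str, username: str, password: str) -> urllib.request.Request:
    """Build the POST that exchanges credentials for an API token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{base_url}/suite-api/api/auth/token/acquire",
        data=body,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )


def auth_headers(token: str) -> dict:
    """vROps expects its own token scheme rather than a plain Bearer header."""
    return {"Authorization": f"vRealizeOpsToken {token}", "Accept": "application/json"}


def build_export_request(base_url: str, token: str) -> urllib.request.Request:
    """Build a GET for the content export; the route here is an assumption."""
    return urllib.request.Request(f"{base_url}/suite-api/api/content",
                                  headers=auth_headers(token))


if __name__ == "__main__":
    # In a real run you would send these with urllib.request.urlopen() and
    # write the response body to your backup location, then schedule the
    # script as a periodic task (step 4).
    req = build_token_request(BASE_URL, "admin", "your-password")
    print(req.full_url)
```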

The benefits of using Content Management in vROps are numerous:

1. Backup and restore your configuration easily: With Content Management, you can quickly backup and restore your vROps configuration, including dashboards, views, report templates, and more.

2. Export your configuration for sharing or migration: If you need to share your vROps configuration with others or migrate it to a different environment, Content Management allows you to export the content in a format that can be easily imported into another vROps instance.

3. Schedule backups and exports: You can schedule backups and exports of your vROps configuration using the Content Management tab or a Python script, ensuring that your data is always protected.

4. Reduce downtime in case of issues: By having a backup of your vROps configuration, you can quickly restore it in case of any issues, reducing downtime and minimizing the impact on your business.

In conclusion, Content Management in vROps provides a powerful tool for backing up and exporting your virtual infrastructure management configuration, ensuring that your data is always protected and available when needed. Whether you choose to use the Content Management tab under Administration or a Python script, taking periodic backups of your vROps configuration is essential for minimizing downtime and maximizing business continuity.

Unlock the Secrets of vRA 7.6 Reservations

In this blog post, we will be sharing a PowerShell script that uses the PowervRA module to export a list of vRealize Automation 7.6 reservations along with their respective usage. The script was tested on the following versions:

* vRealize Automation 7.6

* PowervRA Module 2.1.1

Before we dive into the details of the script, let me brief you about the requirements for running this script. You will need the following information to run the script successfully:

* vRA_FQDN: The fully qualified domain name of your vRealize Automation server.

* Tenant_Name: The name of the tenant for which you want to export the reservations.

* Path_To_Target.csv: The path where you want to save the exported reservation data as a CSV file.

* TenantAdmin Credentials: The credentials of an administrator who has access to the tenant.

Once you have this information ready, you can run the script and generate a report of vRealize Automation 7.6 reservations in your environment. Here’s how to run the script:

1. Open PowerShell and import the PowervRA module by running the following command:

```powershell
Import-Module -Name PowervRA
```

2. Set the variables for vRA_FQDN, Tenant_Name, Path_To_Target.csv, and TenantAdmin Credentials. For example:

```powershell
$vRA_FQDN = "your-vra-server-fqdn"
$tenant_name = "your-tenant-name"
$path_to_target = "C:\export\path"
$tenant_admin_credentials = Get-Credential
```

3. Run the script by calling the Export-Reservation cmdlet and passing in the necessary parameters:

```powershell
Export-Reservation -vRA_FQDN $vRA_FQDN -Tenant_Name $tenant_name -Path_To_Target $path_to_target -TenantAdminCredential $tenant_admin_credentials
```

The script will now export the reservations for the specified tenant to the specified CSV file. You can view the exported data by opening the CSV file with a spreadsheet application such as Microsoft Excel.

We hope this script helps you in managing your vRealize Automation 7.6 reservations more efficiently. If you have any questions or need further assistance, please feel free to reach out to us. We are always here to help!

As a side note, we do not have a direct import option for vRealize Automation 8.6 yet. However, the script should still work with minimal modifications. You can try replacing the PowervRA module with the latest version (currently 2.3.1) and see if it works with vRealize Automation 8.6. If you encounter any issues or have further questions, please let us know in the comments section below.

Thank you for reading, and we hope you found this blog post helpful! Don’t forget to subscribe to our blog for more PowerShell scripts, tutorials, and updates on vRealize Automation and other VMware technologies.

Get Started with AWS VPC

Creating a Custom Virtual Private Cloud on AWS: A Step-by-Step Guide

In this blog post, we will guide you through the process of creating a custom Virtual Private Cloud (VPC) on the Amazon Web Services (AWS) cloud platform. We will also cover some key concepts related to VPCs such as Route Table, Subnet, Network Access Control List (ACL), and Security Group.

What is a Virtual Private Cloud?

A Virtual Private Cloud (VPC) is a virtual, private, and logically isolated network that an AWS Customer can define. This private network is dedicated to the customer and allows them to launch their resources such as EC2 instances in this private network. With a VPC, customers can have complete control over their network configuration and security settings.

Key Concepts:

1. Route Table: A Route Table is a set of rules called routes that help routers make effective decisions in routing packets. The route table determines the path that data packets take when traveling between networks.

2. Subnets: A subnet is a range of IP addresses within the VPC's CIDR block into which you can launch EC2 instances. Subnets are isolated from each other by default, and each subnet is associated with a route table and a network ACL.

3. Network Access Control List (ACL): A network ACL includes inbound and outbound rules that allow traffic to flow in and out of a subnet. These rules can be used to control access to resources within the subnet.

4. Security Group: A security group consists of rules that are associated with resources and control the traffic entering or leaving those resources. Security groups can be applied to EC2 instances, RDS instances, load balancers, and other resources backed by elastic network interfaces.

Creating a Custom VPC on AWS: Step-by-Step Guide

To create a custom VPC on AWS, follow these steps:

Step 1: Log in to the AWS Management Console and navigate to the VPC dashboard.

Step 2: Click on “Create VPC” and provide a name for your VPC. Choose the desired CIDR block for your VPC.

Step 3: Provide details for your VPC, including the IPv4 address range and, optionally, an IPv6 address range. Note that a VPC spans all Availability Zones in the region; you choose an Availability Zone later, per subnet.

Step 4: Create a subnet within your VPC. You can choose to create one or more subnets based on your network requirements.

Step 5: Define the route table for your VPC. You can add routes to your route table as needed.

Step 6: Create a security group and associate it with your EC2 instances. You can define inbound and outbound rules as needed.

Step 7: Launch an EC2 instance within your VPC. Choose the desired instance type, provide details for your instance, and select the subnet and security group for your instance.
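For repeatable builds, the same steps can be scripted. The sketch below uses boto3 (the AWS SDK for Python) with placeholder values for the region, CIDR ranges, and names; the `carve_subnets()` helper uses the standard-library `ipaddress` module to split the VPC range into subnet ranges, roughly mirroring steps 2 through 6:

```python
# Rough boto3 sketch of the console steps above. Region, CIDR blocks, and
# resource names are illustrative assumptions, not requirements.
import ipaddress


def carve_subnets(vpc_cidr: str, new_prefix: int, count: int) -> list:
    """Split a VPC CIDR block into `count` smaller subnet CIDR blocks."""
    network = ipaddress.ip_network(vpc_cidr)
    return [str(s) for s in list(network.subnets(new_prefix=new_prefix))[:count]]


def create_custom_vpc():
    import boto3  # imported here so the helper above works without the SDK

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Steps 2-3: create the VPC with the chosen IPv4 CIDR block
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Step 4: create one subnet per carved range
    subnet_ids = [
        ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr)["Subnet"]["SubnetId"]
        for cidr in carve_subnets("10.0.0.0/16", new_prefix=24, count=2)
    ]

    # Step 5: create a route table and associate it with the first subnet
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_ids[0])

    # Step 6: create a security group allowing inbound SSH
    sg_id = ec2.create_security_group(
        GroupName="demo-sg", Description="demo security group", VpcId=vpc_id
    )["GroupId"]
    ec2.authorize_security_group_ingress(
        GroupId=sg_id, IpProtocol="tcp", FromPort=22, ToPort=22, CidrIp="0.0.0.0/0"
    )
    return vpc_id


if __name__ == "__main__":
    # A 10.0.0.0/16 VPC carved into two /24 subnets
    print(carve_subnets("10.0.0.0/16", new_prefix=24, count=2))
    # -> ['10.0.0.0/24', '10.0.1.0/24']
```

Step 7 (launching an instance into the subnet and security group) would follow with `ec2.run_instances(...)`, which additionally needs an AMI ID specific to your region.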

Key Takeaways:

* A Virtual Private Cloud (VPC) is a virtual, private, and logically isolated network that can be defined by an AWS Customer.

* VPCs allow customers to have complete control over their network configuration and security settings.

* Key concepts related to VPCs include Route Table, Subnet, Network Access Control List (ACL), and Security Group.

* To create a custom VPC on AWS, follow the step-by-step guide provided above.

Conclusion:

In this blog post, we have covered the manual process of creating a custom Virtual Private Cloud (VPC) on the Amazon Web Services (AWS) cloud platform. We have also talked about some of the key concepts related to VPCs such as Route Table, Subnet, Network Access Control List (ACL), and Security Group. By understanding these concepts and following the step-by-step guide provided above, you can easily create a custom VPC on AWS and launch your resources within this private network. Happy learning!

Streamline Your Infrastructure with Single Sign-On Configuration for VMware vRealize Suite

To add the vCenter vSphere Client to the All Apps list and connect it with SSO, you can use VMware Identity Manager (vIDM) to authenticate users and provide access to the vRealize Suite components. However, vCenter does not support vIDM as an identity source, so you cannot configure SSO for vCenter directly.

Here’s a possible solution:

1. Use vIDM to authenticate users and provide access to other vRealize Suite components, such as vRealize Automation, vRealize Log Insight, and vRealize Operations Manager.

2. For vCenter, you can use the VMware Identity Manager Connector for vCenter, which allows you to authenticate with vIDM using a web interface. This connector is available in the vRealize Automation App Marketplace.

3. Once you have installed and configured the VMware Identity Manager Connector for vCenter, you can add it as an app in your vRealize Suite catalog, along with the other vRealize Suite components.

4. When users log in to the vRealize Suite catalog using their vIDM credentials, they will be redirected to the VMware Identity Manager Connector for vCenter, where they can authenticate and access vCenter.

Note that this solution does not provide SSO for vCenter directly, but it allows you to use vIDM as an identity provider for other vRealize Suite components, and provides a way to authenticate to vCenter using the VMware Identity Manager Connector.

I hope this helps! Let me know if you have any further questions or concerns.

VM Customization

Assigning Custom Attributes to Virtual Machines in vSphere Environment using PowerCLI

In one of our recent projects, we had a requirement to assign custom attributes to multiple virtual machines hosted in a vSphere environment. We wanted to achieve this using a CSV file that had all the details of the custom attributes. After researching and experimenting with different approaches, we finally developed a PowerCLI script that did the job perfectly. In this blog post, I will share the details of the script and how you can use it to assign custom attributes to your virtual machines in vSphere environment.

Requirements:

Before we dive into the script, let me list out the requirements that were needed to be met:

1. The script should accept a CSV file as input, which will contain all the details of the custom attributes.

2. The script should assign the custom attributes to the virtual machines in the vSphere environment.

3. The script should accept both the vCenter Server IP address/FQDN and the path of the CSV file as inputs.

Script:

Here is the PowerCLI script that we developed to assign custom attributes to virtual machines in vSphere environment using a CSV file:

```powershell
# Input CSV File Details
$vcenter_server_ip_address = "your-vcenter-server-ip-address"
$csv_file_path = "C:\Path\To\Your\CSVFile.csv"

# Connect to vCenter Server
Connect-VIServer -Server $vcenter_server_ip_address -Credential (Get-Credential)

# Import CSV File (no header row expected; columns are VM, Attribute1, Attribute2)
$csv_data = Import-Csv -Path $csv_file_path -Header "VM", "Attribute1", "Attribute2"

# Loop through each virtual machine in the CSV file
foreach ($vm in $csv_data) {
    # Get the virtual machine object
    $vm_object = Get-VM -Name $vm.VM

    # Assign each custom attribute to the virtual machine
    foreach ($attribute in "Attribute1", "Attribute2") {
        Set-Annotation -Entity $vm_object -CustomAttribute $attribute -Value $vm.$attribute
    }
}
```

Input CSV File Details:

In the above script, we need to provide two input details:

1. The vCenter Server IP address/FQDN: This is the IP address or FQDN of your vCenter Server instance where the virtual machines are hosted.

2. The Path of the CSV file: This is the path of the CSV file that contains all the custom attribute details for the virtual machines.

Script Explanation:

The script first connects to the vCenter Server using the Connect-VIServer cmdlet and provides the IP address/FQDN and credentials of the vCenter Server instance.

Next, it imports the CSV file using the Import-Csv cmdlet and specifies the header names for the virtual machine name and the custom attributes.

Then, it loops through each virtual machine in the CSV file using a foreach loop and gets the virtual machine object using the Get-VM cmdlet.

After that, it assigns the custom attributes to the virtual machine using the Set-Annotation cmdlet, once for each attribute column in the CSV file.

Using the Script:

To use the script, update the $vcenter_server_ip_address and $csv_file_path variables with your own values and run the script in a PowerCLI session. You will be prompted for your vCenter Server credentials, after which the script works through the CSV file and stamps the attributes onto each virtual machine.

Conclusion:

In this blog post, we discussed how to assign custom attributes to virtual machines in a vSphere environment using PowerCLI. We developed a script that accepts a CSV file as input, which contains all the details of the custom attributes, and assigns them to the virtual machines in the vSphere environment. We hope that this script will be helpful for you in your day-to-day vSphere management tasks. Happy scripting!

Automating Datastore Creation with PowerShell

Creating Multiple Datastores Using PowerCLI Script

In one of our recent projects, we had a requirement to create multiple datastores using a PowerCLI script. We were presented with a list of approximately 60 LUNs, and we needed to create a datastore for each one on all ESXi hosts in the cluster. In this blog post, I will discuss how we achieved this using a PowerCLI script.

Requirements

————

Before we begin, let me outline the requirements of the script:

1. **InvalidCertificateAction**: We set the InvalidCertificateAction to Ignore to avoid any issues with certificate validation.

2. **Confirm**: We set Confirm to $false to suppress the confirmation prompt when creating the datastores.

3. **Connect-VIServer**: We connect to the vCenter server using the IP address or FQDN and the credentials of an account with appropriate permissions.

4. **Import-Csv**: We import a CSV file containing the list of datastore names and NAA IDs.

5. **Foreach-Object**: We loop through each object in the imported CSV file.

6. **New-Datastore**: We create a new datastore for each LUN, specifying the name, path, VMFS version, and other relevant details.

7. **Get-Cluster**: We get the list of all clusters in the environment.

8. **Get-VMhost**: We get the list of all ESXi hosts in the cluster.

9. **Get-VMHostStorage**: We rescan all HBA for each host to ensure that the newly created datastores are visible.

10. **Start-Sleep**: We wait for 15 seconds between each datastore creation operation to avoid overwhelming the vCenter server with too many requests at once.

Script

——

Here is the PowerCLI script that we used to create multiple datastores:

```powershell
# Ignore invalid certificates and connect to vCenter
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
Connect-VIServer -Server your-vcenter-ip-or-fqdn -Credential (Get-Credential)

$datanames = Import-Csv 'C:\Users\Admin\Desktop\File_with_datastore_name_NAA_Ids.csv'

foreach ($dataname in $datanames) {
    # Echo the current datastore name and NAA ID for progress tracking
    $dataname.Datastore_Name
    $dataname.Naa_Id

    # Create the datastore, then rescan so every host in the cluster sees it
    New-Datastore -VMHost ESXi-01.mycloud.lab -Name $dataname.Datastore_Name -Path $dataname.Naa_Id -Vmfs -FileSystemVersion 6
    Get-Cluster -Name "Cloud-Clu-01" | Get-VMHost | Get-VMHostStorage -RescanAllHBA

    Start-Sleep -Seconds 15
}

Disconnect-VIServer -Confirm:$false
```

Explanation

———–

Let’s go through each line of the script:

1. `$datanames = Import-Csv 'C:\Users\Admin\Desktop\File_with_datastore_name_NAA_Ids.csv'` – We import the list of datastore names and NAA IDs from the CSV file.

2. `foreach ($dataname in $datanames)` – We loop through each object in the imported CSV file.

3. `$dataname.Datastore_Name` – We extract the datastore name from each object.

4. `$dataname.Naa_Id` – We extract the NAA ID from each object.

5. `New-Datastore -VMHost ESXi-01.mycloud.lab -Name $dataname.Datastore_Name -Path $dataname.Naa_Id -Vmfs -FileSystemVersion 6` – We create a new datastore for each LUN, specifying the name, path, VMFS version, and other relevant details.

6. `Get-Cluster -Name "Cloud-Clu-01" | Get-VMHost | Get-VMHostStorage -RescanAllHBA` – We rescan all HBAs for each host to ensure that the newly created datastores are visible.

7. `Start-Sleep -Seconds 15` – We wait for 15 seconds between each datastore creation operation to avoid overwhelming the vCenter server with too many requests at once.

8. `Disconnect-VIServer -Confirm:$false` – We disconnect from the vCenter server; the -Confirm:$false switch suppresses the confirmation prompt.

Conclusion

———-

In this blog post, we discussed how we created multiple datastores using a PowerCLI script. We imported a list of datastore names and NAA IDs from a CSV file, looped through each object, and created a new datastore for each LUN on all ESXi hosts in the cluster. We also rescanned all HBA for each host to ensure that the newly created datastores were visible. You can use this script as a starting point for your own PowerCLI scripting needs. Happy scripting!

VMware Unveils Beta Host Client

VMware Introduces Revamped Host Client for vSphere Beta Program

VMware, a Broadcom company, has announced the beta launch of its revamped Host Client for vSphere. This release is exclusively available to members of the vSphere Beta program, who can now download and explore the newly designed GUI for ESXi management. The new client offers an intuitive interface that mirrors the familiar feel of the vSphere Client and includes several functionalities to enhance user experience.

The Legacy Version to Enter Deprecation Phase

VMware has addressed concerns regarding the older version of the Host Client, which has encountered several issues related to technology and user experience. The legacy version will enter a deprecation phase but will remain supported in the next major release of vSphere. It will coexist with the new UI, but support will phase out with subsequent updates.

Functionalities Included in the Beta Release

The beta version of the new Host Client includes several functionalities to enhance user experience:

1. User-Friendly Interface: The new client introduces a more intuitive interface that supports both list and card layouts in data grids and incorporates quick, string-based filtering for efficient navigation.

2. Compatible with Multiple Operating Systems: The beta version is a web desktop application compatible with macOS, Windows, and Linux.

3. Seamless Integration Across Versions: The client is designed to connect with both the latest and older versions of ESXi, enabling seamless integration across various installations.

4. Easy Connection Details Management: Users can save connection details to facilitate easy switching between servers.

5. Enhanced User Experience: The new Host Client addresses concerns regarding technology and user experience, providing a more intuitive interface that mirrors the familiar feel of the vSphere Client.

VMware Seeks Feedback and Continues Enhancements

VMware is actively seeking feedback on the new Host Client through a dedicated form available within the GUI. Participants in the beta test are also encouraged to join the Customer Experience Improvement Program, which helps VMware gather valuable telemetry data to further refine the software. The development team at VMware is committed to gradually enhancing the capabilities of the Host Client, with future updates introducing advanced functionalities covering networking, storage management, and overall administration.

Stay Tuned for Future Developments

VMware will continue to provide regular updates on this transition and the enhancements to the new Host Client. Stay tuned for further developments and enhancements, and don’t miss the chance to help shape the future of VMware’s products by participating in this exciting beta program!

Subscribe to the channel: https://bit.ly/3vY16CT

Read my blog: https://angrysysops.com/

Twitter: https://twitter.com/AngrySysOps

Facebook: https://www.facebook.com/AngrySysOps

My Podcast: https://bit.ly/39fFnxm

Mastodon: https://techhub.social/@AngryAdmin