Farewell Tanzu Community Edition

Tanzu Community Edition: A Fallen Open Source Project

In a sudden and unexpected move, VMware has discontinued its open source project, Tanzu Community Edition, and replaced it with a free license for Tanzu Kubernetes Grid in non-commercial environments of up to 100 cores. The decision has left the community in shock, and many are wondering what led to this outcome.

Tanzu Community Edition was an awesome open source implementation of Tanzu that relied on Cluster API to provision Kubernetes clusters on various infrastructure providers. It was known for its ease of use and its automated process for spinning up clusters with control plane and worker nodes. The project had a small but active team within VMware, plus many contributors from the open source community.

However, doubts started to form when Broadcom announced its intent to acquire VMware in May 2022. Many resignations followed, including that of Joe Beda, co-founder of Heptio and a key contributor to Tanzu Community Edition. From that point on, activity in the project’s Slack channel slowed down significantly, and there were no new releases or answers to questions.

The last version of Tanzu Community Edition was 0.12.1, released just before KubeCon 2022 in Valencia, Spain. John McBride, one of the main contributors to the project, announced his resignation along with several other key contributors. This led to a sense of unease in the community, and it became clear that something was amiss.

Fast forward to late October. When I opened Slack to start my workday, I found that Tanzu Community Edition had been retired and replaced with Tanzu Kubernetes Grid, free in non-commercial environments up to 100 cores. The project page had been updated to reflect this change.

The reason behind this move is not entirely clear, but it seems that VMware has decided to focus on its commercial offerings instead of maintaining an open source project. The company has offered Tanzu Kubernetes Grid as a free download for non-commercial environments, which is a significant change from the previous offering.

As someone who is active in the community and uses Tanzu Community Edition for my work, I am saddened by this news. However, the silver lining is that we now have access to Tanzu Kubernetes Grid for free, which should be more than enough to play with.


In conclusion, the retirement of Tanzu Community Edition is a significant change in the open source community. While it is unfortunate to see such an awesome project come to an end, we can hope that the community will continue to thrive and innovate with new offerings from VMware.

Exploring VMware Labs Flings with James Green and vBlog Results on the vChat Podcast (Episode 41)

VMware Labs Flings: Revolutionizing Virtualization Management

In episode #41 of vChat, Simon Seagrave, Eric Siebert, and David Davis sat down with vExpert James Green to discuss his favorite VMware Labs Flings, including DRS Doctor, I/O Analyzer, and PowerCLI tools. These tools have revolutionized virtualization management by providing users with valuable insights into their virtual infrastructure and enabling them to optimize performance, troubleshoot issues, and automate tasks.

DRS Doctor is a powerful tool that allows users to analyze and optimize their Distributed Resource Scheduler (DRS) configuration. It provides detailed information about the current DRS setup, including cluster-wide and host-level resource utilization, and offers recommendations for improving performance and reducing bottlenecks. With DRS Doctor, administrators can easily identify and address issues related to resource contention, oversubscription, and other common challenges.

I/O Analyzer is another valuable Fling that provides insights into the I/O behavior of virtual machines (VMs) in a vSphere environment. It offers detailed information about I/O operations, such as read and write operations, disk utilization, and queue depth. This information can be used to identify performance bottlenecks, optimize storage configurations, and troubleshoot issues related to I/O performance.

PowerCLI is a collection of PowerShell modules whose cmdlets provide a unified interface for managing VMware vSphere infrastructure. It covers a wide range of tasks, such as automated deployment, patching, and configuration management. With PowerCLI, administrators can streamline their workflows, reduce manual tasks, and improve overall efficiency.

In addition to discussing these Flings, the group also covered the vBlog 2016 results and what they are doing with their home labs. The vBlog awards recognize the best virtualization blogs and content creators in the industry, and this year’s winners included some of the biggest names in virtualization.

As for home labs, Simon, Eric, and David shared their experiences with setting up and maintaining their own home labs, including tips for selecting hardware, configuring networks, and maintaining a safe and secure environment. They also discussed the importance of having a home lab for testing and learning new virtualization technologies, as well as for developing and showcasing one’s skills to potential employers.

Overall, episode #41 of vChat provided valuable insights into the world of VMware Labs Flings and their impact on virtualization management. Whether you’re a seasoned veteran or just starting out in the industry, these tools offer a wealth of opportunities for optimizing performance, troubleshooting issues, and automating tasks. So why wait? Start exploring these Flings today and take your virtualization skills to the next level!

Automate Your Workflows with OpenFaaS and vUptime.io

Forwarding VMware Event Broker (VEBA) Events to Argo Workflows for Enhanced Automation

In my previous posts, I discussed the basics of onboarding in the FaaS (Function-as-a-Service) and Event-Driven worlds using VMware Event Broker (VEBA). Today, I’d like to expand on this topic by showing how you can forward VEBA events to a powerful Workflow engine named Argo to run custom workflows. This approach can be useful when the automation needs to break one or more of the FaaS best practices listed in the VEBA documentation.

To demonstrate this concept, we’ll use an OpenFaaS function called veba-to-argo-fn, which is a simple forwarder (or proxy) that executes a pre-defined Workflow Template by providing the incoming cloud-event as an input parameter of the Workflow execution. Let’s dive into the steps required to set this up:

1. Clone the repository:

First, clone the veba-to-argo-fn repository. This will give you access to the OpenFaaS function and the files needed for the setup.

2. Copy and customize the argoconfig.example.yaml file:

Next, copy the argoconfig.example.yaml file from the repository and customize it according to your needs. This file contains the configuration for Argo, which will be used to execute the Workflows.

3. Deploy the configuration file as a new FaaS secret:

After customizing the argoconfig.example.yaml file, you need to deploy it as a new FaaS secret. This will make the configuration available to the OpenFaaS function.

4. Edit the stack.yaml file:

In the stack.yaml file, pull the OpenFaaS language template for the function’s language (e.g., Python or JavaScript) so the function can be built and deployed.

5. Trigger the Workflow:

Now that everything is set up, you can trigger the Workflow by calling the function’s URL with an ?event= query parameter (the exact URL depends on your OpenFaaS gateway). This should execute a Workflow based on the echoer template in Argo.

6. Test and monitor the workflow:

You can test the Workflow by calling the function again with different event data. You can also monitor the progress of the Workflow in the Argo UI, which gives a quick view of each step’s status.

7. Re-run an existing instance of a workflow:

If you need to re-run an existing instance of a workflow (with the same inputs), call the function’s URL again with a workflow-id query parameter alongside event.

8. Trigger VEBA events from your vCenter server:

Finally, you can trigger VEBA events from your vCenter server to execute the Workflows. For example, you can configure your vCenter server to send a VmCreatedEvent, VmClonedEvent, VmRegisteredEvent, DrsVmPoweredOnEvent, or VmPoweredOnEvent to the OpenFaaS function, which will then execute the appropriate Workflow based on the incoming event data.
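Conceptually, the forwarder described in the steps above has one job: wrap the incoming cloud-event as an input parameter of a workflow-submit request. Here is a minimal Python sketch of that idea; the request shape is modeled on Argo’s workflow submit endpoint, but treat the field names as illustrative rather than authoritative:

```python
import json

def build_submit_payload(template_name, cloud_event):
    """Wrap an incoming cloud-event as the input parameter of a WorkflowTemplate run.

    The shape mirrors Argo's POST /api/v1/workflows/{namespace}/submit request;
    field names here are a sketch, not a verified contract.
    """
    return {
        "resourceKind": "WorkflowTemplate",
        "resourceName": template_name,
        "submitOptions": {
            # Argo passes template parameters as "name=value" strings
            "parameters": ["event=" + json.dumps(cloud_event)],
        },
    }

# Illustrative event payload; a real VEBA cloud-event carries many more fields
event = {"subject": "VmPoweredOnEvent", "source": "/vcenter.example.local"}
payload = build_submit_payload("echoer", event)
print(payload["resourceName"])  # -> echoer
```

In the real function this payload would be POSTed to the Argo API using the credentials from the argoconfig secret; the sketch stops at building the request body.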

With this setup, you can now run custom Workflows that catch the event data and take multiple actions. The echoer workflow template is a simple example of what you can achieve with Argo and VEBA. The possibilities are endless, and I encourage you to experiment with different workflow templates to see what you can create.

Conclusion:

In this blog post, we explored how to forward VEBA events to Argo Workflows for enhanced automation. By using the veba-to-argo-fn OpenFaaS function, you can execute pre-defined Workflows based on incoming cloud-events. This approach can be useful when the automation needs to break one or more of the FaaS best practices listed in the VEBA documentation. I encourage you to test and experiment with this setup to see what you can achieve.

Monitoring Applications with VMware vSphere HA

VMware vSphere HA: Host Failure Detection and Isolation

In our previous article, we discussed the basics of VMware vSphere HA (High Availability) and its features. In this article, we will dive deeper into the topic and explore host failure detection and isolation in vSphere HA.

Host Failure Detection

VMware vSphere HA uses various methods to detect host failures, including:

1. Network heartbeating: the vSphere HA agent on each host exchanges network heartbeats with the cluster’s elected master host. If the master stops receiving a host’s heartbeats, it begins the failure-detection process rather than immediately declaring the host dead.

2. Datastore heartbeating: each host also writes heartbeats to shared datastores. This lets the master distinguish a host that has actually failed from one that is merely isolated from the management network or caught in a network partition.

3. VM monitoring: vSphere HA can also monitor individual virtual machines (VMs) through VMware Tools heartbeats. If a VM’s heartbeats stop and no disk or network I/O is observed for the VM, vSphere HA can reset it.
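The heartbeat-timeout idea behind failure detection can be sketched in a few lines (the timeout value below is illustrative, not vSphere HA’s actual setting): a monitor only needs to track the last time each host was heard from.

```python
import time

# Toy model of heartbeat-based failure detection; the timeout is illustrative.
HEARTBEAT_TIMEOUT = 15.0  # seconds without a heartbeat before a host is suspect

class HeartbeatMonitor:
    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_seen = {}  # host name -> timestamp of last heartbeat

    def record_heartbeat(self, host, now=None):
        """Note that a heartbeat from `host` arrived at time `now`."""
        self.last_seen[host] = time.monotonic() if now is None else now

    def failed_hosts(self, now=None):
        """Return hosts whose last heartbeat is older than the timeout."""
        now = time.monotonic() if now is None else now
        return [h for h, t in self.last_seen.items() if now - t > self.timeout]

monitor = HeartbeatMonitor()
monitor.record_heartbeat("esxi-01", now=0.0)
monitor.record_heartbeat("esxi-02", now=10.0)
print(monitor.failed_hosts(now=20.0))  # -> ['esxi-01']
```

The real product layers datastore heartbeats and pings on top of this check before declaring anything failed, precisely to avoid false positives.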

Isolation

In the event of a host failure or isolation, vSphere HA works to keep the impacted VMs available without disturbing the rest of the cluster. This involves:

1. Virtual machine (VM) restart: when a host is declared failed, vSphere HA restarts the VMs that were running on it on other hosts in the cluster.

2. Isolation response: when a host is still running but has lost its management-network heartbeats, it declares itself isolated, and the configured isolation response (leave powered on, power off, or shut down) determines what happens to its VMs so they can be restarted elsewhere.

3. Network partitions: when a network failure splits the cluster into groups of hosts that can no longer reach each other, each partition elects its own master so that failover decisions can still be made on both sides of the split.

Application Monitoring

vSphere HA also provides application monitoring features that allow you to monitor the status of your applications running on VMs in the cluster. You can use this feature to detect any issues with your applications and take corrective action before they become critical.

Conclusion

In conclusion, vSphere HA provides robust host failure detection and isolation features to ensure high availability and prevent downtime. By understanding these features and how they work together, you can ensure the reliability of your virtualized infrastructure and protect your business-critical applications.

FAQs

1. What is vSphere HA?

vSphere HA is a feature of VMware vSphere that provides high availability and failover capabilities for virtual machines (VMs) running on ESXi hosts.

2. How does vSphere HA detect host failures?

vSphere HA detects host failures through network heartbeats exchanged between the HA agents and the master host, backed by datastore heartbeats. VM-level failures are detected through VMware Tools heartbeats combined with disk and network I/O activity checks.

3. How quickly does vSphere HA detect a failed host?

HA agents exchange heartbeats every second. When a host’s heartbeats stop, the master confirms the failure through datastore heartbeats and pings before declaring the host failed, all within a short, configurable window.

4. How does vSphere HA isolate failed hosts?

vSphere HA handles failed or isolated hosts by restarting affected VMs on healthy hosts, applying the configured isolation response on an isolated host, and electing a master per network partition so failover can continue.

5. What is application monitoring in vSphere HA?

Application monitoring in vSphere HA allows you to monitor the status of your applications running on VMs in the cluster and detect any issues before they become critical.

Streamlining ESXi Local User Management with Aria Automation Orchestrator

Managing Local User Accounts on VMware ESXi Hosts using Aria Automation Orchestrator

In my previous blog post, I provided a walkthrough of how to manage ESXi local user accounts using PowerCLI and vCenter Server. In this post, we will explore how to use VMware Aria Automation Orchestrator to manage local user accounts on VMware ESXi hosts. We will create four new actions: getUsers, createUser, updateUser, and removeUser. These actions will allow us to obtain a list of all local user accounts from a provided VMware ESXi host, create a new local user account, update an existing user account, and delete a user account.

Getting a List of Local User Accounts

Our first goal is to obtain a list of all local user accounts from a provided VMware ESXi host. To accomplish this, we create a new VMware Aria Automation Orchestrator action called getUsers. This action has one input which is of type VcHostSystem. In this example, my VcHostSystem input variable is called host.

The code for the getUsers action is as follows:

```
// One way to list local accounts from Orchestrator: retrieve the host's
// access-control entries via the HostAccessManager managed object
var accessManager = host.configManager.hostAccessManager;

var userAccounts = accessManager.retrieveHostAccessControlEntries();

return userAccounts;
```

This action uses the host’s HostAccessManager to retrieve the access-control entries, which include the local accounts defined on the host, and returns them as an array. Note that Orchestrator actions are written in JavaScript against the vSphere API, so the PowerCLI cmdlets from the previous post don’t apply here.

Creating a New Local User Account

Next, we will create a new local user account on the ESXi host. To do this, we create a new VMware Aria Automation Orchestrator action called createUser. This action has three inputs: the host (VcHostSystem) plus the username and password (both strings).

The code for the createUser action is as follows:

```
// Build an account specification for the new local user
var accountSpec = new VcHostAccountSpec();
accountSpec.id = username;
accountSpec.password = password;

// Create the account via the host's local account manager
host.configManager.accountManager.createUser(accountSpec);
```

This action builds a HostAccountSpec and hands it to the host’s local account manager, which creates the account on the specified ESXi host.

Updating an Existing Local User Account

Next, we will update an existing local user account on the ESXi host. To do this, we create a new VMware Aria Automation Orchestrator action called updateUser. This action has three inputs: the host (VcHostSystem) plus the username and new password (both strings).

The code for the updateUser action is as follows:

```
// Build an account specification carrying the new password
var accountSpec = new VcHostAccountSpec();
accountSpec.id = username;
accountSpec.password = password;

// Apply the change via the host's local account manager
host.configManager.accountManager.updateUser(accountSpec);
```

This action builds a HostAccountSpec for the existing account and calls updateUser on the host’s local account manager to set the new password.

Deleting a Local User Account

Finally, we will delete a local user account from the ESXi host. To do this, we create a new VMware Aria Automation Orchestrator action called removeUser. This action has one input: the id of the user account to delete.

The code for the removeUser action is as follows:

```
// Remove the account by its user name (id)
host.configManager.accountManager.removeUser(userAccountId);
```

This action calls removeUser on the host’s local account manager, passing the user name (id) of the account to delete. Unlike the PowerCLI approach from the previous post, no confirmation prompt is involved, so nothing needs to be suppressed.

Conclusion

In this post, we explored how to use VMware Aria Automation Orchestrator to manage local user accounts on VMware ESXi hosts. We created four new actions: getUsers, createUser, updateUser, and removeUser. These actions allow us to obtain a list of all local user accounts from a provided VMware ESXi host, create a new local user account, update an existing user account, and delete a user account. They can be used in a standalone fashion or integrated into more complex workflows, such as updating the root user account password on all VMware ESXi hosts within a cluster or a VMware vCenter Server.

vSAN Disk Fault Injection

My Journey from Infrastructure Admin to Cloud Architect: Remote Proof of Concept Testing for vSAN

As a cloud architect, I have had the opportunity to work with various technologies and solutions in the virtualization space. One of the most interesting aspects of my job is conducting proof of concept (POC) testing remotely. Recently, I had the chance to test vSAN, a software-defined storage solution from VMware, using remote POC testing. In this blog post, I will share my experience with remote POC testing for vSAN and the challenges I faced during the process.

Why Remote POC Testing?

Traditionally, POC testing for vSAN involves setting up a test environment on-site, which can be time-consuming and costly. However, with remote POC testing, I can conduct the testing from my home lab, eliminating the need for on-site testing and reducing the overhead associated with it.

Remote POC testing also allows me to test vSAN in a more realistic environment, as I can simulate real-world scenarios and test the solution under different conditions. This helps me to identify potential issues and bottlenecks before deploying the solution in a production environment.

Challenges of Remote POC Testing

One of the major challenges of remote POC testing is the lack of physical access to the hardware. In on-site testing, I can physically access the hardware and perform tests such as hot unplugging or physical network failure. However, in remote testing, I need to rely on software-based tools to simulate these scenarios.

Another challenge is the limited visibility into the test environment. Without physical access to the hardware, it can be difficult to monitor the test environment and diagnose issues that may arise during the testing process.

vSAN Disk Fault Injection Script

To overcome these challenges, I used the vSAN Disk Fault Injection script, which is available on ESXi by default. The script allows me to simulate disk failures and test the resilience of the vSAN cluster.

The script has several options, including -u for injecting a hot unplug, which I used in my testing. To run the script, I needed to specify the device ID of the drive I wanted to test. I used esxcli vsan storage list to obtain the device ID of the cache drive (the one reporting Is Capacity Tier: false).
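Picking the right device ID out of the esxcli output by eye is error-prone, so a small parser can do it instead. The sample output below is an assumption about the command’s text format (field names taken from the real listing, values invented), so treat this as a sketch:

```python
def find_cache_devices(esxcli_output):
    """Return device IDs whose 'Is Capacity Tier' flag is false (cache tier)."""
    devices = []
    current_id = None
    for line in esxcli_output.splitlines():
        line = line.strip()
        if line.startswith("Device:"):
            current_id = line.split(":", 1)[1].strip()
        elif line.startswith("Is Capacity Tier:") and current_id:
            if line.split(":", 1)[1].strip().lower() == "false":
                devices.append(current_id)
    return devices

# Invented sample in the shape of `esxcli vsan storage list` output
sample = """\
naa.500a07510f86d6b3
   Device: naa.500a07510f86d6b3
   Is Capacity Tier: false
naa.500a07510f86d6c4
   Device: naa.500a07510f86d6c4
   Is Capacity Tier: true
"""

print(find_cache_devices(sample))  # -> ['naa.500a07510f86d6b3']
```

On a real host you would feed this the captured command output rather than a hard-coded string.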

Testing vSAN with Remote POC

To test vSAN using remote POC, I followed these steps:

1. Configure the vSAN cluster on my home lab environment.

2. Connect to the ESXi host (for example over SSH) so you can use the esxcli command-line tool, alongside the vSphere Client for monitoring.

3. Run the vSAN Disk Fault Injection script with the appropriate options to simulate a disk failure.

4. Monitor the status of the data and the resync of objects as they are brought back into policy compliance.

5. After completing the testing, I simply rescanned the host for storage devices, which brought the “unplugged” drive back online.

Results and Observations

During my remote POC testing for vSAN, I observed several things:

1. The vSAN Disk Fault Injection script is a powerful tool for testing the resilience of the vSAN cluster. It allowed me to simulate disk failures and observe how the cluster responded to the failure.

2. vSAN’s health and resync views provided detailed information about the status of the data while objects resynced back into compliance.

3. The remote POC testing environment closely mimicked a real-world production environment, allowing me to identify potential issues and bottlenecks before deploying the solution in a production environment.

4. The lack of physical access to the hardware did not significantly impact my ability to test vSAN. The software-based tools provided by VMware allowed me to simulate physical failures and test the resilience of the cluster.

5. The process of resyncing objects due to “compliance” was seamless and efficient, providing peace of mind that the data was safe and secure.

Conclusion

In conclusion, remote POC testing for vSAN is a valuable tool for cloud architects and infrastructure admins looking to test the resilience of their vSAN clusters. The vSAN Disk Fault Injection script provides a powerful way to simulate disk failures and test the cluster’s ability to recover from failures.

While there are challenges associated with remote POC testing, such as limited visibility into the test environment and reliance on software-based tools, these challenges can be overcome with careful planning and execution. By leveraging remote POC testing, cloud architects and infrastructure admins can identify potential issues and bottlenecks before deploying vSAN in a production environment, ensuring a successful implementation and minimizing downtime.

Unlock Scalability and Cost Savings with Storage Spaces Direct for VMware vSphere Environments

Storage Spaces Direct (S2D) is a software-defined storage (SDS) solution that allows Windows Server nodes to pool their local storage drives into a single, highly available storage pool. This provides a flexible and scalable storage solution for virtual machines in a VMware vSphere environment. In this blog post, we will explore the benefits and best practices of using S2D in a vSphere environment, as well as some sample configurations.

Benefits of Using S2D in a vSphere Environment

———————————————–

There are several benefits to using S2D in a vSphere environment, including:

### High Performance

S2D provides high performance storage for virtual machines, allowing them to run at optimal levels.

### High Availability

S2D provides a highly available storage solution, ensuring that virtual machines have access to their data even in the event of a failure.

### Scalability

S2D can be scaled up or down as needed, allowing for flexible storage solutions that can grow with your environment.

### Ease of Deployment and Management

S2D is easy to deploy and manage, allowing administrators to focus on other tasks.

Best Practices for Using S2D in a vSphere Environment

—————————————————

When using S2D in a vSphere environment, there are a few best practices that you should follow:

### Use Cluster-Across-Box (CAB) Deployment

CAB deployment is the recommended method for deploying S2D in a vSphere environment. This involves installing S2D on a group of physical servers and then accessing the S2D storage pool from the vSphere cluster.

### Use Virtual Machine Cluster-Across-Box (vCAB) Deployment When Necessary

While CAB deployment is the recommended method, there may be situations where vCAB deployment is more appropriate. This involves installing S2D on a group of virtual machines and then accessing the S2D storage pool from the vSphere cluster.

### Use Multiple Storage Pools

It is recommended to use multiple storage pools in a vSphere environment, each with its own set of performance characteristics. This allows administrators to optimize storage for different workloads.

Sample Configuration 1: CAB Deployment with Two Storage Pools

———————————————————

In this sample configuration, S2D is deployed on a group of physical servers and two storage pools are created, one with high performance disks and another with capacity-oriented disks.

Sample Configuration 2: vCAB Deployment with Three Storage Pools

———————————————————–

In this sample configuration, S2D is deployed on a group of virtual machines and three storage pools are created, one with high performance disks, another with capacity-oriented disks, and a third with a combination of both.

Sample Configuration 3: CAB Deployment with Multiple Storage Pools

———————————————————–

In this sample configuration, S2D is deployed on a group of physical servers and multiple storage pools are created, each with its own set of performance characteristics. This allows administrators to optimize storage for different workloads.

Conclusion

———-

Storage Spaces Direct (S2D) is an excellent option for providing shared storage for virtual machines in a VMware vSphere environment. With its high performance, high availability, scalability, and ease of deployment and management, S2D is a flexible and cost-effective storage solution that can help to simplify your data center operations. By following best practices and using sample configurations, you can easily deploy S2D in your vSphere environment and start enjoying the benefits it provides.

Embracing a New Era for VMware Communities

A New Era for VMware Communities: Embracing the Future with Broadcom

On May 6, 2024, VMware Communities is set to embark on a new journey as we transition to a new platform under Broadcom Communities. This significant step forward marks a new era for our community, and we couldn’t be more thrilled about the opportunities that lie ahead.

As you may have heard, Broadcom recently acquired VMware, and this transition is part of the larger integration process. By joining forces with Broadcom, we can leverage their extensive resources and expertise to take our community to new heights. We’re excited to explore the many possibilities that this partnership presents, and we’re confident that our community will continue to thrive under the Broadcom umbrella.

For our loyal readers and subscribers, rest assured that this transition will not disrupt your access to our content. Your subscription information will be transferred seamlessly to the new platform, and you’ll continue to receive the same high-quality content that you’ve come to expect from us. Our commitment to providing valuable insights and updates on all things VMware remains unchanged, and we look forward to continuing our journey with you.

We understand that change can be difficult, but we assure you that this transition is a positive step forward for our community. With Broadcom’s support, we’ll have access to more resources, better tools, and enhanced features that will enable us to deliver even more value to our members. We’re eager to explore the new opportunities that this partnership presents and to continue serving our community with excellence.

We want to take a moment to thank each and every one of you for your loyalty and support over the years. Your engagement and participation have been instrumental in making VMware Communities the vibrant and thriving community that it is today. We’re honored to have such an incredible group of members, and we look forward to continuing our journey together.

So, what can you expect from us in this new era? First and foremost, the same high-quality content you’ve come to rely on: valuable insights, updates, and best practices on all things VMware, along with new topics and areas of interest as we go.

In addition to our regular content offerings, you can also expect to see some exciting new features and resources on our platform. With Broadcom’s support, we’ll have access to more advanced tools and technologies that will enable us to deliver even more value to our members. We’re eager to explore these new opportunities and to continue innovating and pushing the boundaries of what’s possible in our community.

We’re thrilled about the future of VMware Communities under Broadcom, and we’re excited to embark on this new journey together. Thank you for your loyalty and support, and we look forward to continuing to serve our community with excellence.

NGINX Sprint 2020

Next week, NGINX Sprint 2020 is happening, and it’s shaping up to be an exciting event! If you’re wondering what the deal is or whether you should register, here are some reasons to join in on the fun.

First off, there will be Live Demos where you can learn how to build an app in under two hours. This is a great opportunity to learn new skills and follow the same principles at your own pace afterwards. Additionally, delegates from Tech Field Day will be participating in the daily wrap-up each day of the sprint, providing valuable insights and discussions.

But wait, there’s more! The highlight of the event has got to be the Hackathon, co-sponsored by AWS. This kicks off on September 14th (my birthday, btw), so don’t miss out! There is a separate registration for the hackathon, so be sure to sign up as soon as possible.

The NGINX Team has also provided a breakdown of the event, which you can check out here. They’ve got a great lineup of speakers and sessions lined up, including a session on machine learning and artificial intelligence.

Overall, this is an amazing opportunity to learn, network, and have fun with like-minded individuals in the tech industry. So, what are you waiting for? Register now for free and join us at the NGINX Sprint 2020! See you there!

ESXi-Arm Fling 1.10 Refresh

As a developer, you understand the importance of keeping your tools and technologies up to date to ensure optimal performance and security. This is especially true for virtualization platforms like the ESXi-Arm Fling, which has become a critical component of many modern development and home-lab workflows.

Recently, the team at VMware released an updated version of the ESXi-Arm Fling that includes several exciting new features and improvements. In this blog post, we’ll take a closer look at some of the key enhancements in the latest refresh and how they can help improve your development workflows.

Improved RK3566 Support

One of the most significant updates in the latest refresh is improved support for the RK3566 processor. This powerful SoC (System-on-Chip) is widely used in a variety of devices, from smartphones and tablets to smart home devices and more. With the new version of the Fling, you can expect better performance and stability when running workloads on RK3566-based hardware.

Virtualization Improvements

Another key area of improvement in the latest version of Fling is virtualization. Virtualization is a critical feature for developers who need to test their applications on multiple platforms and environments. With the new version of Fling, you can expect faster and more reliable virtualization performance, thanks to a number of optimizations and improvements under the hood.

For example, the latest version of Fling includes improved support for GPU acceleration, which can help speed up your virtualization workloads. Additionally, the team at VMware has made significant optimizations to the memory management code, which can help reduce memory usage and improve overall system performance.

Other Enhancements

In addition to the improved RK3566 support and virtualization enhancements, the latest version of Fling includes a number of other exciting new features and improvements. For example:

* Improved networking performance: With the new version of Fling, you can expect better network performance when running your applications. This is thanks to a number of optimizations and improvements to the networking code, which can help reduce latency and improve overall network throughput.

* Enhanced security features: The latest version of Fling includes a number of new security features that can help protect your applications and data from unauthorized access and malicious attacks. For example, the team at VMware has added support for secure boot, which can help ensure that only trusted software is able to run on your devices.

* Better compatibility with popular development tools: The latest version of Fling includes improved compatibility with a wide range of popular development tools, such as Git and SVN. This can help streamline your development workflows and make it easier to collaborate with your team.

Conclusion

In conclusion, the latest ESXi-Arm Fling refresh from VMware is a must-have for any developer who relies on virtualization in their workflows. With improved RK3566 support, enhanced virtualization performance, and a number of other exciting new features and improvements, this release is sure to help you take your development to the next level.

So why wait? Upgrade your older Fling installations today and start taking advantage of these new features and improvements. With the latest refresh, you can expect better performance, improved security, and a more streamlined development workflow. Don’t miss out: download the latest version now and start developing like never before!