VMware Explore 2022

VMware Cloud on AWS is a service built to remain at the cutting edge of what VMware has to offer, with frequent updates and improvements. On top of that steady cadence, VMware Explore 2022 brought some exciting announcements for customers considering moving their workloads to VMware Cloud on AWS. Here are three key updates:

1. VMware Cloud Flex Storage: Now generally available for consumption, this scalable storage offering significantly reduces costs for most workloads. Previously in preview, it tackles what has long been the barrier to entry for interested customers: cost, especially for storage-heavy workloads.

2. Scalable Datastore Storage: VMware Cloud on AWS now offers a jointly engineered solution with Amazon FSx for NetApp ONTAP, a multi-tenant, multi-protocol storage environment. This goes a long way toward alleviating storage management concerns and supports the popular protocols: NFS v3/4.1, iSCSI, and SMB. It also provides synchronous mirroring across availability zones for mission-critical workloads, offering real peace of mind.

3. Flex Compute: VMware Cloud on AWS is introducing a new feature called Flex Compute, which allows customers to buy as many flexible compute units as they need and increase or decrease them as their computing needs change within minutes. This feature is significant for smaller customers who have found the starting costs of VMware Cloud on AWS to be too high for their budgets.

These updates are precisely what most customers were looking for, and I am confident that they will result in an uptick in organizations moving their workloads to VMware Cloud on AWS. The cost has been a significant barrier to entry for many potential customers, but with the new pricing model and scalable storage solutions, more businesses can now consider migrating their workloads to the cloud.

Join Me at VMware Explore

Attending VMware Explore: A Community-Focused Event

As a long-time attendee of the in-person VMworld Europe events, I am eagerly awaiting the upcoming VMware Explore event. For me, VMware Explore represents the same spirit and community-focused atmosphere that has made VMworld such an integral part of my professional development and networking. While the branding has changed, I hope that the core elements that make this event special remain unchanged.

The primary reason for attending VMware Explore is to meet and catch up with the members of my beloved VMware community. The “VM Village” (or whatever it’s called that year) is always the place to be, as all the cool kids gather there in the mornings and in between sessions. This is where I end my event and am often one of the last attendees to leave the venue. I also make it a point to deliver a session and support my colleagues during their presentations.

Another crucial aspect of VMware Explore for me is visiting the Solutions Exchange, where vendors showcase their current and upcoming products. This provides me with an opportunity to get in-depth information beyond what is available in sales and marketing collateral. Additionally, I make it a point to attend the vRockstar party and VMUnderground events, which serve as warm-up events before the main event.

VMware Explore offers numerous opportunities for attendees to mingle with each other in a relaxed atmosphere. The vBreakfast and the big VMworld party are always highlights, and there are vendor parties nearly every night in between. These events allow attendees to develop deep relationships and even form friendships, which has a positive impact on one’s career. I have personally benefited from this community-driven approach and have also tried to help colleagues whenever possible.

The close-knit and helpful community that VMware Explore embodies is something that I highly value. Everyone knows each other, and the event provides a platform for attendees to put faces to names and form meaningful connections. This has a major positive impact on one’s career, as people are willing to help each other out in tough times. I have seen this firsthand on Twitter, Slack, LinkedIn, and other platforms, where everyone has each other’s back.

While the branding of VMworld has changed to VMware Explore, I hope that the general format of the event remains the same. The community-focused atmosphere is what makes this event special, and I am confident that the hardcore attendees will try to keep it the same. In fact, VMunderground is planning to be there for the US event, which is a testament to the event’s enduring popularity.

If you haven’t made up your mind yet, I highly recommend attending VMware Explore if you can. Even if it’s your first time, the close-knit community and relaxed atmosphere will make it feel like a reunion of sorts. Whether or not I make it there myself, rest assured that the event will offer the same experience it has given me and everyone else who values its community-driven approach.

Unleashing the Power of vSphere+ and vSAN+

VMware vSphere+ and vSAN+: Enhancing Infrastructure Management and Beyond

Last year, VMware announced Project Arctic, a technology preview that aimed to integrate cloud connectivity into vSphere. The goal was to consolidate all management functionality into one cloud-based console, allowing for consistent management of all vSphere platforms, regardless of their location. Today, VMware is launching the fruits of that labor under the names of vSphere+ and vSAN+. These new offerings aim to enhance operational efficiency, simplify lifecycle management, and provide a holistic view of the environment, all while extending visibility and access to developer services and centralizing security and governance.

One of the key benefits of vSphere+ and vSAN+ is the consolidation of all VMware clouds under a single cloud-based console. This means that general administrators and developers can take advantage of enhanced platform management with integrated logging, registry management, and monitoring functions. Additionally, the console provides an easy method for converting traditional licenses to subscriptions, which is required for the additional functionality these new offerings provide.

vSphere+ and vSAN+ aim to enhance infrastructure services through the provision of add-ons, such as disaster recovery, ransomware protection, and capacity planning. These add-ons can be integrated consistently across target environments, providing a centralized management experience. Furthermore, the security and governance of all Kubernetes clusters under management become centralized through a common console, ensuring that all aspects of the environment are protected and managed effectively.

While the additional functionality provided by vSphere+ and vSAN+ requires a subscription model, it is important to note that this console is additional to local management provision, meaning organizations do not need to worry about losing access or control if the service connection drops. Think of it as a VMware Cloud Gateway Appliance on steroids: while that appliance focused purely on creating hybridity between on-premises and cloud-based VMware Cloud environments, vSphere+ and vSAN+ are designed to provide a lot more functionality on top of the hybridity aspect.

In conclusion, vSphere+ and vSAN+ significantly enhance infrastructure management: they centralize administration, simplify lifecycle management, provide a holistic view of the environment, extend visibility and access to developer services, and centralize security and governance. While the subscription model may be a concern for some organizations, the added functionality is sure to convince many to move towards a more comprehensive and integrated management experience. For more information on this new offering, please visit vSpherePlus.com.

vRetreat February 2022

Tech Talks in an Informal Setting: vRetreat and the Future of Data Protection

In today’s fast-paced digital landscape, tech talks are becoming increasingly popular as a way for professionals to connect, learn, and share ideas. One such event that stands out from the rest is vRetreat, an informal virtual retreat hosted by Patrick Redknap. This blog post will delve into the two excellent presentations given by Cohesity and Progress Software, as well as the fun multiplayer game of “Walkabout Mini Golf” that was played after the presentations.

Excellent Presentations

The first presentation was given by Cohesity, which focuses on data protection and threat defense architecture. The presentation covered the company’s modular approach to defense mechanisms for data protection, as well as its work on new services such as Fort Knox & DataGovern that will be available shortly. These services are designed to combat newer threats in the ever-evolving cybersecurity landscape.

The second presentation was given by Progress Software and focused on the latest advancements in ransomware and how it has evolved over time. It covered the various ways in which ransomware has changed and become more sophisticated, as well as the methods organizations can use to protect themselves from these types of attacks.

Multiplayer Game of “Walkabout Mini Golf”

After the presentations, it was time for some fun! Patrick had arranged for a multiplayer game of “Walkabout Mini Golf” between the attendees, which was a brilliant idea. The game was played using Oculus Quest 2 headsets that were sent to the attendees in advance, and it proved to be an enormous amount of fun. The game allowed the attendees to connect with one another in a more informal setting, while also engaging in a fun activity together.

vRetreat and the Future of Data Protection

Patrick hopes to run more events like this in the future so that attendees can have regular chats and gaming sessions in virtual reality environments, and even full VR vRetreats! This is an excellent idea, as it allows professionals to connect with one another in a more informal setting while engaging in fun and educational activities.

Conclusion

In conclusion, vRetreat was an excellent event that brought together professionals from the tech industry to learn, connect, and have some fun. The presentations given by Cohesity and Progress Software were both informative and engaging, and the multiplayer game of “Walkabout Mini Golf” was an excellent way for the attendees to connect with one another in a more informal setting. With the ever-evolving landscape of cybersecurity threats, events like vRetreat are essential for professionals who want to stay up-to-date on the latest advancements in data protection and threat defense architecture.

vCenter Server Won’t Boot

Well, it’s not the most ideal situation to be in – a power outage in your home lab, and your vCenter server refusing to boot up with an ominous error message about file system issues. But fear not, dear reader! For I have lived to tell the tale, and I’m here to share with you how I resolved this issue without having to reinstall my vCenter server.

First things first, let me give you a brief overview of what happened. After a power outage in my home lab, I tried to boot up my vCenter server, but it failed with an error message saying that there were issues with the file system and that the System Check could not be started. Now, this is not exactly the most encouraging thing to see, especially when you’ve got important virtual machines running on that server.

But fear not, my friends! After some investigation and troubleshooting, I confirmed that the issue was a corrupted file system on one of the appliance’s disks. And guess what? It was an easy fix! Booting the appliance into its emergency shell and running a file system check (fsck) against the affected partition did the rest: it scanned the file system, identified the corrupted structures, and repaired them.

Now, you might be thinking, “Paul, why didn’t you just reinstall the whole thing?” Well, my dear reader, let me tell you. I have been in this game long enough to know that sometimes a simple solution is all you need. And in this case, a simple file system check did the trick. Plus, I didn’t want to risk losing all my virtual machines and their configurations.

So there you have it, folks! A power outage and a corrupted file system almost had me singing the blues, but thankfully, it was an easy fix. So if you’re experiencing similar issues with your vCenter server, try a file system check from the emergency shell before you start thinking about reinstalling everything.

And on that note, I’d like to share a little bit more about myself. As the CIO at Sonar, an Automation Practice Lead at Xtravirt, and a guitarist in The Waders, I love IT, automation, programming, and music. Yeah, I know – it’s a weird combination, but hey, it works for me! And if you’re interested in learning more about my musical exploits, feel free to check out my band’s website.

That’s all for now, folks. Happy automating, and may your power outages be few and far between!

Aria Automation Config Mastery

Automation Config: Setting Up and Executing Simple Jobs

In this series of posts, we will be exploring the Automation Config product and how to manually enable management of deployed systems, create a custom desired state, and integrate with Cloud templates. In this post, we will focus on initiating simple jobs to gain familiarity with the concept and process.

Defining Jobs in Automation Config

In Automation Config, a job is defined as a set of tasks to be performed. Out of the box, there are many pre-built jobs available, while you can also create custom jobs to suit your specific requirements. The first job we will execute is a ping job, which is one of the simplest default jobs available.

Executing the Ping Job

To execute the ping job, follow these steps:

1. Log in to the Automation Config server using your account.

2. From the navigation menu, expand the Config menu and select Jobs.

3. Select the test.ping job by clicking on its name.

4. From the navigation menu, click on Minions, then ensure All Minions is selected.

5. Locate your Ubuntu server and tick the checkbox next to it.

6. Click the RUN JOB button. In the popup dialog, select the test.ping job and tick both options to notify for success or failure of the job. Then, click RUN NOW.

Monitoring Job Execution

After running the job, you can monitor its execution by expanding the Activity menu and selecting In Progress. You will see your job queued and waiting to be executed. After a minute, refresh the screen, and the job will have disappeared. Note that the screen does not currently refresh automatically when a job’s status changes!
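Since the Activity screens don’t refresh automatically, tracking a job is essentially a poll loop. Here is a minimal Python sketch of that pattern; the job states and the `get_status` callable are simplified stand-ins for illustration, not the real Automation Config API:

```python
import itertools

def poll_job(get_status, max_polls=10):
    """Poll a job's status until it is no longer queued or in progress.

    get_status: callable returning one of "queued", "in_progress",
    "completed", or "failed" (simplified states for illustration).
    Returns the final status, or "timeout" if the job never finished.
    """
    for _ in range(max_polls):
        status = get_status()
        if status not in ("queued", "in_progress"):
            return status
    return "timeout"

# Simulate a job that sits in the queue, runs, then completes.
states = itertools.chain(["queued", "in_progress", "in_progress"],
                         itertools.repeat("completed"))
final = poll_job(lambda: next(states))
print(final)  # completed
```

In a real script you would add a sleep between polls, just as you refresh the Activity screen by hand in the UI.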

Examining Job Details

To view the details of the completed job, select the Completed option on the navigation menu. You will see your completed job at the top of the list. Click on the entry in the Job ID column to open up the job. The job details screen is quite intuitive and contains several sections that are important to look at:

* Summary: This section displays who executed the job, as well as the results. A green tick indicates the number of minions that reported success in executing the job, a cross represents those that failed, and a brown diamond represents minions that have not yet reported back.

* Lower Portion: This section is divided into selectable sections, each containing different information about the job execution. Let’s explore each section:

+ Job History: This section displays a list of all jobs executed on the system, including the one we just ran.

+ Task History: This section displays a list of all tasks executed as part of the job, along with their status.

+ Logs: This section contains logs related to the job execution.
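The Summary counts can be thought of as a simple tally over per-minion returns. A hedged Python sketch with made-up minion data (this is just the idea, not how Automation Config computes it internally):

```python
def summarize_returns(targeted, returns):
    """Tally job results the way the Summary pane presents them.

    targeted: minion IDs the job was sent to.
    returns:  mapping of minion ID -> True/False (success/failure);
              minions absent from the mapping have not reported back.
    """
    succeeded = sum(1 for m in targeted if returns.get(m) is True)
    failed = sum(1 for m in targeted if returns.get(m) is False)
    unreported = sum(1 for m in targeted if m not in returns)
    return {"success": succeeded, "failure": failed, "unreported": unreported}

targeted = ["ubuntu01", "ubuntu02", "ubuntu03"]
returns = {"ubuntu01": True, "ubuntu02": False}  # ubuntu03 never reported
print(summarize_returns(targeted, returns))
# {'success': 1, 'failure': 1, 'unreported': 1}
```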

Executing Another Job

Let’s execute another job, this time selecting the Reboot Linux job as shown below:

Monitor the job to see when it completes. As you will see, the job reports as completed, but your server will still be rebooting. Note that this task does not wait for the server to reboot and come back online.
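Because the job returns before the server is actually back, any follow-up automation needs its own wait. A small Python sketch of that pattern, assuming an open SSH port (22) is a good enough signal that the server is online again (a hypothetical helper, not part of Automation Config):

```python
import socket
import time

def wait_for_port(host, port, attempts=30, delay=2.0, timeout=1.0):
    """Return True once a TCP connection to host:port succeeds,
    or False after exhausting the given number of attempts."""
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            if attempt < attempts - 1:
                time.sleep(delay)
    return False

# Example: block until the rebooted server answers on SSH again.
# if wait_for_port("ubuntu01.lab.local", 22):
#     print("server is back online")
```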

Conclusion

In this post, we have covered the basics of setting up and executing simple jobs in Automation Config. We have seen how to define a job, execute it, and monitor its execution. We have also explored the job details screen and the different sections it contains. In our next post, we will delve deeper into creating custom desired states and integrating with Cloud templates. Stay tuned!

Tagged: automation, SaltStack, VMware, vRealize.

Starting Out with Aria Automation Configuration – Part Two

Configuring LDAP Integration with Active Directory for Aria Automation Config

In this article, we will explore how to configure LDAP integration with Active Directory for Aria Automation Config. This will enable centralized control of access and roles within the Aria Automation Config interface. We will cover the initial requirements, configuring the LDAP option in the Aria Automation Config appliance, allocating users and groups for access, and enabling resource access.

Initial Requirements


Before we begin configuring the integration in the Aria Automation Config product, there are some initial requirements that must be met:

1. The Aria Automation Config appliance should be up and running with the necessary prerequisites installed.

2. An Active Directory server should be set up and running with the appropriate users and groups created.

3. The Aria Automation Config instance should be deployed in a lab environment for testing purposes.

Configuring LDAP Integration


To configure LDAP integration with Active Directory, follow these steps:

1. Log in to the Aria Automation Config appliance using the admin account and password specified during deployment.

2. From the menu, expand the Administration section and select the Authentication option.

3. From the Configuration type dropdown, select the LDAP option.

4. Select the PREFILL DEFAULTS dropdown and select AD, Windows Server 2008 and later (note: ensure your AD server is version 2008 or newer).

5. The form will now display with some information included and some fields empty. The required fields are noted by a red underline.

6. Edit the fields as follows:

* Server: Enter the hostname or IP address of your Active Directory server.

* Base DN: Enter the base distinguished name of your Active Directory domain.

* User Search Filter: Enter the filter used to match user objects in your Active Directory domain (e.g., “(&(objectClass=person)(objectClass=user))”).

* Group Search Filter: Enter the filter used to match group objects in your Active Directory domain (e.g., “(objectClass=group)”).

7. Once you have configured the above fields with your settings, click the UPDATE PREVIEW button.

8. The pane below will eventually load Groups and Users into view. Depending on the size of your directory, this may take some time.

9. Once you are happy with everything, click the SAVE button to save the settings and confirm the LDAP connection.
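Search filters like the ones above are easy to mistype, so it can help to compose and sanity-check them before pasting them into the form. A generic stdlib Python sketch (these helpers are illustrative, not part of Aria Automation Config):

```python
def and_filter(*clauses):
    """Combine LDAP attribute=value clauses into an AND filter string."""
    parts = "".join(f"({c})" for c in clauses)
    return f"(&{parts})"

def balanced(expr):
    """Rough sanity check: parentheses in the filter are balanced."""
    depth = 0
    for ch in expr:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if depth < 0:
            return False
    return depth == 0

user_filter = and_filter("objectClass=person", "objectClass=user")
print(user_filter)            # (&(objectClass=person)(objectClass=user))
print(balanced(user_filter))  # True
```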

Allocating Users and Groups for Access


Now that we have established and saved the LDAP connection, we can proceed with allocating users and groups for access into the Aria Automation Config interface. Follow these steps:

1. From the menu on the left, under Administration, select the Groups option.

2. Find your Active Directory group you created in the requirements section from the list and tick the checkbox.

3. Click the SAVE button.

4. From the menu on the left, under Administration, select the Roles option.

5. Ensure in the left pane, the Salt Master role is selected.

6. Click on the Groups option.

7. Select the checkbox against your Active Directory group and then click SAVE.

8. Select the Resource access tab.

9. Enable both Show all * options as shown below and assign full permissions to each entry. Then click the Save button.

Signing Out and Logging In with LDAP Authentication


After configuring the LDAP integration, you may notice that the login page is slightly different. From the authentication backend dropdown, select your LDAP connection as shown below:

![LDAP Authentication Selection](https://i.imgur.com/cqLH3V5.png)

Enter the user account and password for the Active Directory user that is within your Active Directory group, and then login.

Congratulations! You have now established Active Directory connectivity and authentication for your Aria Automation Config instance. This integration will enable centralized control of access and roles within the Aria Automation Config interface, streamlining management and ensuring consistency across your IT infrastructure.

Unlocking Aria Automation Config

Aria Automation Config: A Welcome Addition to the Aria Suite

In my previous posts, I have been exploring the features and capabilities of Aria Automation Config, a powerful tool that allows administrators to define the applications, files, and other settings that should be present on a given system. This feature-rich product is now tightly integrated into the Aria Automation product, enabling administrators to continue the lifecycle of deployed resources. In this post, I will guide you through setting up the Automation Config product and integrating it with your Cloud templates.

Installation and Initial Configuration

To get started with Aria Automation Config, you will need to gather some information and carry out a few steps before you start deployment. For the sake of this series of blog posts, I used the Add Product option to deploy Automation Config into an existing environment that had Aria Automation deployed. Once installation is complete, navigate to the user interface in your web browser at https://fqdn/login. Enter admin as the username and the password you used during the deployment.

Once logged in, you should be greeted with a view similar to the one below. The initial configuration of the appliance includes selecting the management server, configuring the database, and defining the desired state of the system. In the next post in this series, we will perform initial configuration of the appliance and explore how to create a custom desired state.

Benefits of Aria Automation Config

Aria Automation Config offers several benefits for administrators looking to streamline their IT operations. With this tool, you can:

1. Define the applications, files, and other settings that should be present on a given system.

2. Continuously evaluate the system against the desired state and make changes as needed.

3. Integrate with your Cloud templates for seamless deployment and management of resources.

4. Use the Aria Suite Lifecycle product to deploy and manage your systems.
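Point 2 above, evaluating a system against its desired state, boils down to diffing what a system reports against what it should have. A deliberately simplified Python sketch of the idea (real Salt states are defined in YAML, and the comparison is far richer):

```python
def plan_changes(desired, actual):
    """Return the settings that must change to reach the desired state."""
    changes = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            changes[key] = {"old": have, "new": want}
    return changes

desired = {"nginx": "installed", "ntp": "running", "motd": "managed"}
actual = {"nginx": "installed", "ntp": "stopped"}
print(plan_changes(desired, actual))
# {'ntp': {'old': 'stopped', 'new': 'running'}, 'motd': {'old': None, 'new': 'managed'}}
```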

Conclusion

In conclusion, Aria Automation Config is a powerful tool that allows administrators to define the applications, files, and other settings that should be present on a given system. With this tool, you can continuously evaluate the system against the desired state and make changes as needed. In the next post in this series, we will explore how to create a custom desired state and integrate with your Cloud templates. Stay tuned for the next one!

About the Author

Paul Davey is CIO at Sonar, Automation Practice Lead at Xtravirt, and guitarist in The Waders. He loves IT, automation, programming, music, and is passionate about helping organizations streamline their IT operations with Aria Automation Config.

Unlocking Efficiency with Aria Automation Configuration

Setting Up Aria Automation Config for SaltStack Management

In this series of posts, we will take you through the process of setting up Aria Automation Config for SaltStack management. We will cover everything from the requirements and deployment of the Aria Automation Config component to creating custom desired states and integrating with Cloud templates. In this first post, we will go over the requirements and deployment of the Aria Automation Config instance.

Requirements for Aria Automation Config


Before you begin, it’s important to understand the requirements for setting up Aria Automation Config. Here are some key things to keep in mind:

* Aria Automation Config requires a SaltStack environment to be already set up and configured.

* You will need an Active Directory domain to use Aria Automation Config for access control and role-based management.

* You will need at least one Ubuntu server with the required agents installed to manage and configure your infrastructure.

* You should have a basic understanding of SaltStack and Aria Automation Config concepts and features.

Deploying Aria Automation Config


Once you have met the requirements, you can begin deploying the Aria Automation Config instance. Here are the general steps:

1. Install the Aria Automation Config package on your SaltStack master node.

2. Configure the Aria Automation Config instance by providing the necessary information such as the Active Directory domain and the IP address of your Ubuntu server.

3. Deploy the Aria Automation Config agent on your Ubuntu server.

4. Configure the agent to communicate with the Aria Automation Config instance.

5. Test the setup and verify that everything is working as expected.

In the next post, we will cover how to configure the Aria Automation Config instance to utilize Active Directory for access control and role-based management. Stay tuned!

About the Author


Paul Davey is the CIO at Sonar, the Automation Practice Lead at Xtravirt, and a guitarist in The Waders. He loves IT, automation, programming, and music. You can find more of his work on the AutomationPro blog.

Streamlining Your Ubuntu 22.x Virtual Machine Setup with Cloud-init and VMware Aria Automation

Creating a Template in vSphere for Cloud-Init Automation

As I embarked on my latest project, I realized that I needed to create a template in vSphere for cloud-init automation. However, I did not want to use vSphere customization specifications but rather rely solely on cloud-init. After researching various posts and attempting different methods, I documented the steps that worked for me. In this blog post, I will share my experience and the procedures I followed to create a template in vSphere for cloud-init automation.

Step 1: Install Cloud-Init

The first step is to install cloud-init. Although it should already be installed, let’s be safe and ensure that it’s set up properly. To install cloud-init, run the following command in your terminal:

`sudo apt update && sudo apt install cloud-init`

Step 2: Clean Up Existing Cloud-Init Configurations

Newer Ubuntu installers use cloud-init themselves, so we need to clean up any existing cloud-init configurations. To do this, run the following command:

`sudo cloud-init clean --all`

This command will remove any existing cloud-init configurations.

Step 3: Remove Unnecessary Files

As of Ubuntu 20.04.x live server and later, we need to remove two existing items (a directory and a file) to ensure that our cloud-init configuration will execute later on. The two items are:

* `/etc/cloud/cloud.cfg.d/`

* `/etc/cloud/cloud-config.json`

To remove these files, run the following commands:

`sudo rm -rf /etc/cloud/cloud.cfg.d/`

`sudo rm -rf /etc/cloud/cloud-config.json`

Step 4: Shut Down the VM

Now that we have cleaned up any existing cloud-init configurations and removed unnecessary files, it’s time to shut down the VM. To do this, run the following command:

`sudo poweroff`

Step 5: Set CD-ROM Device Mode to Passthrough CD-ROM

Under the VM hardware settings, ensure that the CD-ROM drive’s device mode is set to Passthrough CD-ROM. To do this, follow these steps:

1. In the vSphere client, right-click the virtual machine you want to use as a template and select Edit Settings.

2. In the Virtual Hardware section, expand the CD/DVD drive entry.

3. Set the device mode to Passthrough CD-ROM.

4. Click OK to save the changes.

Step 6: Convert to Template

With the cleanup complete and the VM already powered off from Step 4, the final task is to convert it to a template:

1. In the vSphere client, right-click the virtual machine.

2. Select Template, then Convert to Template, and confirm when prompted.

Your template is now ready to be used with VMware Aria Automation cloud templates.
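Once the template exists, an Aria Automation cloud template typically hands cloud-init its configuration as user data at clone time. A minimal, hypothetical cloud-config snippet to illustrate the shape (the hostname, user, key, and package are placeholders, not values from this post):

```yaml
#cloud-config
hostname: web01
users:
  - name: deploy
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... deploy@example
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```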

Conclusion

Creating a template in vSphere for cloud-init automation can be a bit challenging, but following these steps will ensure that your template is properly set up and ready to use with VMware Aria Automation cloud templates. Remember to install cloud-init, clean up any existing configurations, remove unnecessary files, shut down the VM, set the CD-ROM device mode to Passthrough CD-ROM, and convert the VM to a template. With these steps, you’ll be well on your way to automating your cloud infrastructure with cloud-init and vSphere.