Packer Build Failure

I recently hit a peculiar issue while creating a new set of Packer builds with version 1.6.2. The Windows Server 2019 ISO on my vSphere datastore would randomly fail to connect. I must admit, I scratched my head a bit on this one. After some research through Google (aka “Google-Fu”), I found a known issue that is due to be resolved in the next release of Packer. The problem appears to be related to having characters other than letters or numbers in the name of the datastore that held my ISO.

To resolve the issue, I simply removed the hyphen (-) character from my datastore name, and everything worked fine! It’s interesting how sometimes the simplest of solutions can be overlooked. This experience reminded me of the importance of double-checking even the most trivial aspects of our work, as they can sometimes make all the difference in resolving issues efficiently.
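For illustration, the offending setting was the ISO path in my vsphere-iso builder definition. The datastore and file names below are stand-ins for my own, but the shape is the same (shown here in Packer’s HCL2 format):

```hcl
source "vsphere-iso" "win2019" {
  # With Packer 1.6.2, a datastore name containing a hyphen here
  # (e.g. "lab-datastore") caused the ISO to intermittently fail to connect.
  # Renaming the datastore to letters and numbers only resolved it.
  iso_paths = [
    "[labdatastore1] ISO/en_windows_server_2019.iso"
  ]
}
```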

As a CIO at Sonar and Automation Practice Lead at Xtravirt, I am passionate about IT, automation, programming, and music. As a guitarist in The Waders, I love how technology and creativity can intersect to create something beautiful and functional.

This experience with Packer has reinforced the value of continuous learning and exploration in our field. It’s essential to stay up-to-date with the latest tools and technologies, as they can often help us overcome challenges more efficiently. In addition, it’s crucial to share our experiences and knowledge with others, as this can help create a collaborative and supportive community that benefits everyone involved.

In conclusion, my recent experience with Packer has taught me the importance of paying close attention to even the smallest details and the value of sharing our experiences with others. By doing so, we can continue to improve our craft and provide the best possible solutions for our clients and colleagues.

Terraform in Action with VMware vRA 8.x

Playing with Terraform and the vRA 8 Provider

As an automation enthusiast, I’m always on the lookout for new tools and technologies to help streamline my infrastructure deployments. Recently, I’ve been experimenting with Terraform and the vRA 8 provider, and I have to say, it’s a great start! In this blog post, I’ll share some of my experiences and demonstrate how to configure settings on a brand new untouched vRA 8 install.

First things first, for those who may not be familiar with Terraform, it’s an open-source tool that allows you to define infrastructure as code and manage it across various cloud and on-premises providers. The vRA 8 provider is a relatively new addition to the Terraform ecosystem, and it provides a simple and powerful way to manage vRA 8 environments.

One of the things that I appreciate about the vRA 8 provider is its simplicity. Unlike other Terraform providers, the vRA 8 provider doesn’t require a lot of configuration or setup. You can simply install it and start using it right away. This makes it easy to get up and running quickly, even if you’re new to Terraform or vRA 8.

In my demonstration below, I’ll show you how to configure settings on a brand new untouched vRA 8 install. The first thing you need to do is declare the vRA 8 provider, which is published as vmware/vra:

```hcl
terraform {
  required_providers {
    vra = {
      source = "vmware/vra"
    }
  }
}
```

Running terraform init will download the vRA 8 provider and make it available for use in your Terraform configurations. Once the provider is initialized, you can start configuring your vRA 8 environment. Here’s an example configuration that connects to a vRA 8 server and requests a small machine deployment. Note that the URL, token variables, and mapping names below are placeholders for values from your own environment:

```hcl
# Connection details for the vRA 8 instance
provider "vra" {
  url           = "https://vra8.example.local"
  refresh_token = var.vra_refresh_token
  insecure      = true # self-signed lab certificate
}

# A machine request; the image and flavor names must match the
# image/flavor mappings configured in Cloud Assembly
resource "vra_machine" "example" {
  name        = "example-server"
  description = "Provisioned by Terraform"
  project_id  = var.project_id
  image       = "ubuntu"
  flavor      = "small"
}
```

This configuration creates a new machine through vRA 8 with the specified name, project, image mapping, and flavor mapping (the flavor determines the vCPU count and memory of the deployed VM).

Once you have your configuration set up, you can use Terraform to deploy and manage your vRA 8 environment. Here’s an example of how to deploy the previous configuration:

```shell
$ terraform apply
```

This will provision the vRA 8 server with the specified settings. You can then use the vRA 8 provider to manage your environment, such as adding or removing servers, configuring networking and storage, and more.

Overall, I’m very impressed with the vRA 8 provider for Terraform. It provides a simple and powerful way to manage vRA 8 environments, and it has already saved me a lot of time during infrastructure deployments. If you haven’t given it a go yet, I highly recommend checking it out. Let me know what you think by hitting me up on Twitter (@pauldavey_79).

Mastering Puppet on vSphere

Puppet Learning VM: A Comprehensive Guide to Getting Started

If you’re interested in learning about Puppet and its powerful automation capabilities, the Puppet Learning VM is an excellent resource to take advantage of. Provided free of charge by Puppet, this virtual machine (VM) offers a comprehensive environment for you to explore and learn about the Server to Client relationships that Puppet offers. In this blog post, we’ll dive into the details of the Puppet Learning VM, its features, and how to get started with it.

What is the Puppet Learning VM?

The Puppet Learning VM is a free virtual machine provided by Puppet that allows you to explore and learn about their automation platform. The VM includes Puppet Enterprise, Puppet Server, Bolt, and a variety of virtual clients, all pre-configured and ready for you to use. This provides an excellent opportunity for beginners to get started with Puppet and learn about its capabilities without any cost.

Features of the Puppet Learning VM

The Puppet Learning VM comes packed with a range of features that make it an ideal environment for learning about Puppet’s automation capabilities. Some of the key features include:

1. Pre-configured Puppet Enterprise and Puppet Server: The VM includes both Puppet Enterprise and Puppet Server, pre-configured and ready for you to use. This allows you to explore the full range of Puppet’s automation capabilities without any hassle.

2. Bolt included: Bolt is Puppet’s standalone automation tool, and it’s included in the Learning VM. This allows you to learn about Bolt’s powerful automation capabilities and how it integrates with Puppet Enterprise and Server.

3. Virtual clients galore: The Learning VM includes a variety of virtual clients, all pre-configured and ready for you to use. These clients allow you to explore the Server to Client relationships that Puppet offers, and how to manage and configure your infrastructure using Puppet’s automation capabilities.

4. Support for Docker: The Learning VM incorporates Docker, allowing you to run containers and explore how Puppet works with Docker. This provides a powerful way to manage and automate your infrastructure, especially in cloud environments.
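As a taste of the Bolt side mentioned above, once the Learning VM is running you can fire ad-hoc commands at the virtual clients. The target name and credentials below are stand-ins for whatever your copy of the VM presents:

```shell
# Run an ad-hoc command over SSH against one of the pre-configured clients
bolt command run 'uptime' --targets client1.example.com --user root
```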

Getting Started with the Puppet Learning VM

To get started with the Puppet Learning VM, follow these steps:

1. Download the OVA file: Visit the Puppet website and download the OVA file for the Puppet Learning VM. The file is available in OVA format, which is optimized for desktop virtualization products like VMware Workstation or VMware Fusion.

2. Import the OVA file: Once you’ve downloaded the OVA file, import it into your preferred virtualization platform. In my case, I used vSphere to power on the VM.

3. Follow the onscreen instructions: After powering on the VM, follow the onscreen instructions to complete the installation process. This includes selecting your language, agreeing to the terms and conditions, and choosing your installation location.

4. Start exploring: Once the installation is complete, you can start exploring the Puppet Learning VM. The virtual clients are pre-configured and ready for you to use, allowing you to dive straight into learning about Puppet’s automation capabilities.

Conclusion

The Puppet Learning VM is an excellent resource for anyone looking to learn about Puppet and its powerful automation capabilities. With a range of features and pre-configured virtual clients, the Learning VM provides an ideal environment for beginners to get started with Puppet. By following the steps outlined in this blog post, you can start exploring the world of Puppet and begin your journey towards becoming an expert in automation. So what are you waiting for? Download the Puppet Learning VM today and start learning about one of the most powerful automation platforms available!

vRA 8 Postman Samples

Using the vRA8 Postman Sample Request Pack for VMware vRA 8 Testing and Automation

As a CIO at Sonar and the Automation Practice Lead at Xtravirt, I am always looking for ways to improve testing and automation in IT. Recently, I came across the vRA8 Postman Sample Request Pack, a collection of 19 example requests to use against a VMware vRA 8 install. This pack can be imported into Postman, a popular API testing tool, and used to test and automate various functionality within vRA8.

In this blog post, I will provide an overview of the vRA8 Postman Sample Request Pack, show how to import it into Postman, and demonstrate how to use it for testing and automation in vRA8.

Overview of the vRA8 Postman Sample Request Pack

The vRA8 Postman Sample Request Pack is a collection of 19 example requests that can be used against a VMware vRA 8 install. These requests cover various functionality within vRA8, such as creating and managing resources, deploying and managing applications, and retrieving information about the environment.

The pack includes the following requests:

* Obtain A Bearer Token

* Get Resource By ID

* Create Resource

* Update Resource

* Delete Resource

* Get Application By ID

* Create Application

* Update Application

* Delete Application

* Get Environment By ID

* Create Environment

* Update Environment

* Delete Environment

* Get Server By ID

* Create Server

* Update Server

* Delete Server

* Get Datastore By ID

* Create Datastore

* Update Datastore

* Delete Datastore

How to Import the vRA8 Postman Sample Request Pack into Postman

To import the vRA8 Postman Sample Request Pack into Postman, follow these steps:

1. Open Postman and click the Import button from the toolbar.

2. Navigate to the location where you downloaded the sample pack and select it.

3. Click Import to import the requests into Postman.

4. Once imported, you will see the collection in Postman’s Collections sidebar.

How to Use the vRA8 Postman Sample Request Pack for Testing and Automation

Once you have imported the vRA8 Postman Sample Request Pack into Postman, you can use it to test and automate various functionality within vRA8. Here are some steps to get started:

1. Right-click on the collection folder and select Edit.

2. Select the Variables tab and edit the three variables: the FQDN of your vRA 8 instance, and the username and password for authentication.

3. Click Update to save your changes.

4. Execute the Obtain A Bearer Token request to obtain a token.

5. You can now execute any of the requests from the collection successfully.

6. You will find that the bearer token is stored (along with some other output values from requests) under the vRA8_Environment environment.
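If you prefer the command line, the token exchange that the Obtain A Bearer Token request performs can be sketched with curl. The FQDN and credentials below are placeholders; the two-step refresh-token/access-token exchange is the standard vRA 8 pattern:

```shell
# Step 1: exchange username/password for a refresh token
curl -sk -X POST "https://vra8.example.local/csp/gateway/am/api/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "configadmin", "password": "VMware1!"}'

# Step 2: exchange the refresh token for a bearer (access) token
curl -sk -X POST "https://vra8.example.local/iaas/api/login" \
  -H "Content-Type: application/json" \
  -d '{"refreshToken": "<refresh_token_from_step_1>"}'
```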

Tips and Tricks for Using the vRA8 Postman Sample Request Pack

Here are a few tips and tricks to keep in mind when using the vRA8 Postman Sample Request Pack:

* Make sure to update the variables with your own FQDN, username, and password for authentication.

* The requests in the pack are organized into collections, so you can easily execute related requests together.

* You can use the environment variables to store values that will be used across multiple requests.

* You can use Postman’s built-in debugging tools to troubleshoot any issues that arise during execution.

Conclusion

The vRA8 Postman Sample Request Pack is a valuable resource for anyone looking to test and automate VMware vRA 8 functionality. With this pack, you can easily import a collection of example requests into Postman and start testing and automating various aspects of your vRA8 environment. Remember to update the variables with your own authentication details and take advantage of Postman’s debugging tools to troubleshoot any issues that arise.

Unlocking the Full Potential of VMware vRA8

VMware vRealize 8: Migration and Blueprints

In the world of cloud computing and virtualization, VMware vRealize 8 is the latest and greatest version of the vRealize platform. With its new features and capabilities, many IT professionals are looking to migrate their existing vRealize 7 instances to the newer version. However, migration can be a complex and daunting task, especially when it comes to blueprints. In this blog post, we’ll explore the migration process, what works and what doesn’t, and how to create cloud-agnostic blueprints in vRealize 8.

Migration Assessment Tool

VMware has provided a Migration Assessment Tool to help you identify the components that can be migrated from your vRealize 7 instance to the newer version. The tool discovers various aspects of your vRealize 7 environment and provides a report on what can be migrated. However, it’s important to note that not all components can be migrated directly, and some may require manual intervention or re-working.

Blueprints in vRealize 8

Blueprints in vRealize 8 have undergone a significant overhaul. Gone are the days of creating separate blueprints for each cloud provider. With vRealize 8, you can create a single blueprint that can deploy to multiple cloud providers, including private and public clouds. This approach not only simplifies your management and deployment process but also enables hybrid cloud deployments.

Creating Cloud-Agnostic Blueprints

To create cloud-agnostic blueprints in vRealize 8, you can use YAML code to define your virtual machines, networks, and other components. With less than 40 lines of YAML code, you can deploy a single blueprint to multiple clouds, including vSphere, AWS, Azure, and Google Cloud. This approach not only streamlines your deployment process but also enables your business to take advantage of the benefits of hybrid cloud deployments.
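For reference, a minimal cloud-agnostic blueprint of the kind described here looks something like the following. The image and flavor values are placeholders that must match the image/flavor mappings defined in your own environment:

```yaml
formatVersion: 1
inputs: {}
resources:
  Cloud_Machine_1:
    type: Cloud.Machine    # cloud-agnostic: resolves to vSphere, AWS, Azure or GCP
    properties:
      image: ubuntu        # image mapping name
      flavor: small        # flavor mapping name
      constraints:
        - tag: 'env:dev'   # steers placement to a tagged cloud zone
```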

Conclusion

In conclusion, migrating from vRealize 7 to vRealize 8 can be a complex task, especially when it comes to blueprints. However, with the right approach and tools, you can simplify your management and deployment process while also enabling hybrid cloud deployments. By creating cloud-agnostic blueprints, you can future-proof your business and take advantage of the benefits of multiple cloud providers. Remember, it’s not just about migrating what you have now but also considering what your business may need in the future.

VMware vRA8

Configuring MS Azure as a Cloud Zone in vRealize Automation 8

In this blog post, we will explore the process of configuring MS Azure as a cloud zone within vRealize Automation 8 (specifically the Cloud Assembly service). We will assume that you have an MS Azure subscription and know how to log in to it. If you don’t, best go find out before you try to follow this blog post!

Step 1: Configuring MS Azure Connection

Log in to your MS Azure portal. On the Dashboard click the Subscriptions card. Within the Subscriptions screen, make a note of the Subscription ID that you want to deploy virtual machines and services into.

From the main menu (the three-line “burger” icon in the top-left corner), select the Azure Active Directory entry. Click App Registrations. To enable vRealize Automation to carry out tasks within our MS Azure subscription, we need to grant it permissions. We do this by registering an Azure Application.

Step 2: Registering an Azure Application

Register a new application. Specify a name, configure the Supported account types (I used the first/default option), and set a Redirect URI. The redirect URI won’t actually be used, so you can point it to any fictitious address you like. Finally, click the Register button.

With our new application registered, make a note of the Directory (tenant) ID. Click the Certificates & secrets menu entry.

Step 3: Adding a Client Secret

Add a new client secret. This will be used to allow vRealize Automation 8 to authenticate with MS Azure. Enter a description, select an expiry duration, and then click Add. On the following screen, make a record of the secret. Once you leave this screen, you will not be able to view it again.

Step 4: Granting Permissions to vRealize Automation 8

Navigate back to the main Dashboard and select the Subscriptions card. From there, select the Access Control (IAM) entry. For vRealize Automation 8 to be able to use this MS Azure subscription, we are going to have to grant our new application entries the appropriate permissions through roles. Click the Add button on the Add a role assignment card.

In the following screen, configure our application as a Contributor. Then repeat this process so that our registered application entry has the Contributor, Owner, and Reader roles.

We are now ready to configure our MS Azure subscription as a cloud zone within vRealize Automation 8.
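The portal steps above can also be scripted with the Azure CLI, which is handy if you need to repeat them across subscriptions. The display name, IDs, and scope below are placeholders:

```shell
# Register the application and create its service principal
az ad app create --display-name "vra8-integration"
az ad sp create --id <appId-from-previous-command>

# Grant a role to the application (repeat for Owner and Reader)
az role assignment create \
  --assignee <appId-from-previous-command> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>
```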

Conclusion

In this blog post, we have covered the process of configuring MS Azure as a cloud zone within vRealize Automation 8. We have seen how to register an application within MS Azure and grant it permissions to access our subscription. With these steps completed, vRealize Automation 8 can now use this MS Azure subscription to deploy virtual machines and services.

Paul Davey is CIO at Sonar, Automation Practice Lead at Xtravirt, and guitarist in The Waders. He loves IT, automation, programming, music, and is passionate about helping others learn and adopt new technologies.

The Mighty Homelab

Home Lab Setup: My Experience with Intel NUC

As a professional in the IT industry, I need a home lab to learn, test, and develop solutions on. My requirements are straightforward: a simple vSphere setup with vRA, vRO, and a Windows Domain Controller providing Active Directory, DNS, and DHCP. After experimenting with various options, I decided to go with an Intel NUC. In this article, I’ll share my experience with the Intel NUC and how it meets my home lab needs.

Why I Chose Intel NUC

I considered a few options, including HP rack servers/tower servers on eBay, but I was looking for something standalone, cost-effective, and with minimal space requirements. The Intel NUC fit the bill perfectly. It’s a compact, powerful device that can run vSphere and other virtualization tools seamlessly.

My Setup

I opted for a single Intel NUC8i5BEH, which comes with a quad-core 8th generation Intel Core i5 processor. It has a single M.2 slot, 4 USB 3.1 ports, a Thunderbolt port, and a Gigabit LAN port. Intel officially rates the NUC at up to 32GB of DDR4 RAM, but it runs happily with more: I installed 2 x Crucial DDR4 32GB DIMMs, giving me 64GB of RAM. For storage, I purchased a Western Digital 3D NAND SSD and installed it as the primary drive. I also had a spare 2.5″ SSD lying around that I installed as well, giving me nearly a terabyte of usable space.

ESXi 6.7, the latest build at the time of writing, runs smoothly on the NUC. The internal disks were discovered without any fiddling, so they both got formatted into VMFS volumes for the host to use. I created a couple of portgroups and sorted out my networking (basic, flat networking on a standard vSwitch, separate routable network to the rest of my house). Then it was on to deploying some VMs and getting things going.

Performance and Resources

The NUC has plenty of resources to handle my home lab needs. With all the virtual machines powered on, and while using vRA to deploy a VM from a template, there are still enough free resources that the NUC isn’t suffering or starved.

My home lab setup consists of:

* ESXi 6.7 (latest build at time of writing)

* vRealize Automation (vRA)

* vRealize Orchestrator (vRO)

* Windows Domain Controller providing Active Directory, DNS, and DHCP

All the virtual machines run smoothly, and I have plenty of resources left over for future expansion. The NUC has nearly a terabyte of usable space, thanks to the two internal storage devices.

Conclusion

In conclusion, the Intel NUC is an excellent choice for a home lab setup. It’s compact, powerful, and cost-effective. With plenty of resources available, it handles my vSphere and virtualization needs seamlessly. If you’re looking for a similar setup, I highly recommend considering the Intel NUC.

The Power of Standards

The Importance of Standardization in Automation

In the fast-paced world of technology, automation is a key factor in streamlining processes and increasing efficiency. However, without standardization, automation can become a complicated and error-prone task. Recently, I have been working with a financial industry customer who is all in on Automation, but it became apparent quite early on that they haven’t standardized their processes. In this article, we will explore the differences between a non-standardized process and one that has been standardized, and how much simpler it is to automate a standardized process.

Non-Standardized Process

Let’s take the example of creating a new user account. Currently, there are three ways someone can request a new account: via email, phone, or through a web form. This results in multiple edits and updates before the account is ready for handover, as seen in the yellow blocks below.

![Non-Standardized Process](https://i.imgur.com/fJhKXPZ.png)

As we can see, the information can arrive staggered, which introduces the risk of error. For example, the request received via phone might result in the incorrect spelling of a surname, which then requires more edits to the user account. This staggered process also creates multiple touchpoints for human interaction, which can lead to errors and delays.

Standardized Process

Now let’s compare this process to the same process with some standards applied. All requests for a new user must come in through a form, and this form has data fields that the requester must complete before submitting. This ensures that all data required for a user account arrives at once, and because it is via a typed form, we avoid the issues of misinterpretation that we had earlier with the first & last name spelling.

![Standardized Process](https://i.imgur.com/hMu8XPZ.png)

The new process reduces the number of edits and ultimately results in the requester getting their end product – the user account – much sooner. Additionally, with standardization, we can now enforce a set of rules and agreements that all users or consumers must adhere to, ensuring consistent quality and comparable conclusions across all processes.

Benefits of Standardization

Standardization simplifies the automation task in several ways. Without standards, we would need to account for every scenario that the request could follow, allow for breaks in the process, enable human interaction, and provide error correction. With standards, we can allow automation to generate the username, validate the inputs, and hand over the item to the requester.
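As a sketch of that last point, here is a minimal Python example (with hypothetical field names) of how automation can validate the standardized form inputs and generate the username:

```python
# Fields the standardized request form guarantees to supply
REQUIRED_FIELDS = ("first_name", "last_name", "department")

def validate_request(request):
    """Reject any request missing a required field - no staggered follow-ups."""
    missing = [f for f in REQUIRED_FIELDS if not request.get(f)]
    if missing:
        raise ValueError(f"Request rejected, missing fields: {missing}")
    return request

def generate_username(request):
    """Derive a username from validated inputs: first initial + surname."""
    validate_request(request)
    return (request["first_name"][0] + request["last_name"]).lower()

username = generate_username(
    {"first_name": "Paul", "last_name": "Davey", "department": "IT"}
)
print(username)  # pdavey
```

Because every request arrives through the same form, the validation and naming rules only ever have to be written once.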

When we have standards, estimating the time required for automation also becomes simpler. We have a clearly defined process with a set of inputs (the user details) and an expected outcome (the user account). Once the process is standardized, and automated, changes to this process become more controlled – we can introduce and enforce a review process, which means any changes to the process are agreed within the team and released in an agreed manner.

Conclusion

In conclusion, standardization is essential for successful automation. Without standards, the process becomes complicated and prone to errors. By applying standards, we simplify the task of automation, reduce the number of edits required, and ultimately deliver a more efficient and effective end product. Estimating the automation effort becomes simpler too, and subsequent changes to the process are controlled through an agreed review process.

So, if you’re looking to automate your business processes, don’t forget to standardize them first. It may take a little more effort upfront, but the benefits of simplified automation, improved efficiency, and increased consistency will be well worth it in the long run.

Terraform in Action

As a CIO at Sonar and the Automation Practice Lead at Xtravirt, I have always been fascinated by the potential of automation to streamline and optimize IT operations. One tool that has particularly caught my attention is Terraform, an open-source tool from HashiCorp that allows you to define and manage infrastructure as code.

In this blog post, I would like to share two videos that demonstrate the use of Terraform to manipulate vSphere infrastructure. The first video shows how to use Terraform to provision a new virtual machine (VM) in vSphere, while the second video demonstrates how to use Terraform to update the configuration of an existing VM.

Before we dive into the videos, let me provide some background information on Terraform and vSphere. Terraform is an infrastructure as code tool that allows you to define your infrastructure using a human-readable configuration file. This configuration file describes the desired state of your infrastructure, and Terraform ensures that the actual state of your infrastructure matches the desired state.

vSphere, on the other hand, is a virtualization platform from VMware that allows you to create, manage, and deploy virtual machines (VMs) in a data center environment. vSphere provides a robust set of features for managing VMs, including support for multiple operating systems, network virtualization, and high availability.

Now, let’s take a look at the first video, which shows how to use Terraform to provision a new VM in vSphere. In this video, we will create a new VM with a specific configuration, including the operating system, network settings, and storage options.
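To give a flavour of what is involved, a stripped-down Terraform definition for a new vSphere VM looks something like this sketch. The server name, object names, and sizes are placeholders for objects in your own environment:

```hcl
provider "vsphere" {
  vsphere_server       = "vcenter.example.local"
  user                 = var.vsphere_user
  password             = var.vsphere_password
  allow_unverified_ssl = true # lab certificates
}

# Look up the existing inventory objects the VM will be placed into
data "vsphere_datacenter" "dc" {
  name = "Datacenter"
}

data "vsphere_datastore" "ds" {
  name          = "datastore1"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = "Cluster/Resources"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "net" {
  name          = "VM Network"
  datacenter_id = data.vsphere_datacenter.dc.id
}

# The desired state of the new VM - CPU, memory, network, and disk
resource "vsphere_virtual_machine" "demo" {
  name             = "demo-vm"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  num_cpus         = 2
  memory           = 2048
  guest_id         = "ubuntu64Guest"

  network_interface {
    network_id = data.vsphere_network.net.id
  }

  disk {
    label = "disk0"
    size  = 20
  }
}
```

Updating the VM later (the subject of the second video) is then just a matter of editing values such as num_cpus or memory and re-running terraform apply.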

The second video demonstrates how to use Terraform to update the configuration of an existing VM in vSphere. In this video, we will show how to update the number of CPU cores and memory allocated to a running VM, as well as how to update the network settings and storage options.

Both videos are designed to be easy to follow, even for those with limited experience with Terraform or vSphere. However, I want to emphasize that these videos are just basic examples of what is possible with Terraform and vSphere. With more advanced configuration and scripting, you can achieve even more sophisticated automation tasks.

In conclusion, Terraform provides a powerful and flexible way to manage your vSphere infrastructure, allowing you to define and maintain your desired state of infrastructure using a human-readable configuration file. By mastering Terraform and vSphere, you can automate a wide range of IT tasks, from provisioning new VMs to updating existing configurations. So, grab some popcorn, sit back, and enjoy the videos!

Navigating Licensing Complexity with Ease

This blog post will demonstrate how to apply one or more licenses to vCenter using Terraform. The tutorial will cover creating a module that can be used across multiple vSphere environments.

First, we will create a folder called “configure_licensing” and within it, we will create the following files:

* provider.tf

* provider_variables.tf

* main.tf

* variables.tf

* outputs.tf

The provider.tf file defines the provider to use, while the provider_variables.tf file defines what needs to be passed in to enable the provider. The main.tf file is where we will define our resource, the variables.tf file declares the inputs to the module, and the outputs.tf file is where we will define the outputs we want to receive.

In the variables.tf file, we have declared a variable called licenses with a type of list(string). This means it expects to receive one or more license keys, each wrapped in quotes and comma separated, with the entire entry wrapped in [] brackets. For example, we would pass in something like the following:

["12345-abcde-fghjk-67890-qwerf", "67890-qwerf-12345-01CU4-987VC"]
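The declaration itself is short; a sketch of what sits in variables.tf:

```hcl
variable "licenses" {
  description = "One or more vSphere license keys to register"
  type        = list(string)
}
```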

In the main.tf file, we have declared a resource of type vsphere_license, named licenseKey. According to the documentation, there are two arguments: the required license_key, and an optional map of labels to attach to the license. We will only specify the required argument, the license key.

To add multiple licenses, we can use Terraform’s built-in length() function to find the number of items in the list. If we passed in two values (two license keys), then the length will be 2, and we set the resource’s count argument to that value.

Here is an example of how we can add multiple licenses:

license_key = var.licenses[count.index]

This is similar to my fruitbowl example (despite the very different syntax!). Terraform will set license_key equal to each of the items in our list(string).
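Putting those pieces together, the resource block in main.tf looks something like this sketch:

```hcl
resource "vsphere_license" "licenseKey" {
  # One instance of this resource per license key supplied
  count       = length(var.licenses)
  license_key = var.licenses[count.index]
}
```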

We have also declared a number of outputs that we would like sent to the console when the application has been successful. These outputs are assigned names and can be referenced later on.

To run the application, we will pass in the definitions file with the license keys that we want to register. We will then run a terraform plan and check what changes will be made. Once we are happy with the changes, we can run terraform apply and pass in the plan. The outputs that we requested will be shown at the end of a successful execution.

In conclusion, this tutorial has demonstrated how to apply one or more licenses to vCenter using Terraform. We have also created a module that can be used across multiple vSphere environments. The code for this tutorial can be found in the configure_licensing folder.