Effortless Remote State Management with Terraform Cloud

Configuring Terraform Cloud for Remote State Management

In this blog post, we will explore how to configure Terraform Cloud to remotely host our Terraform state file. We will use the free subscription, and assume that you have some Terraform config already set up. We will also use the results of a previous three-part blog post series, entitled “Terraform and vSphere”.

Creating an Account on Terraform Cloud

First, we need to create an account on the Terraform Cloud portal. Fill in the details for a new account, click “Create Account”, and complete the verification step to register your account.

Next, select the option to Create an Organization. Create a new organization and specify a name for it (it must be unique to Terraform Cloud). Specify an email address for future correspondence to go to. Click “Create Organization”.

Specifying Organization Details

We have now successfully registered our organization on the Terraform Cloud platform.

Generating a Token

While logged in to the Terraform Cloud portal, click the user icon at the top right and select User Settings from the dropdown menu. Select the Tokens option from the menu on the left. Enter a description for the token (e.g., “terraform_access_token”) and click “Generate Token”. A token will be generated and displayed. You must take a copy of this and store it securely, as it will not be shown again. If you lose it, you will have to delete it and generate a new one.

Storing the Token

On our development system where we have Terraform located, we need to create a file called “terraform.rc” that will contain our generated token. I prefer to keep this file in a folder called “cli_config” and locate it with my other Terraform files. Create a folder called “cli_config” and a blank text file inside called “terraform.rc”.

Configuring Terraform to Read the Token

To make sure that Terraform reads our configuration file, we need to tell Terraform where to locate it. Add a new environment variable as follows:

Environment Variable: TF_CLI_CONFIG_FILE

Value: the full path to your “cli_config/terraform.rc” file

Remember, you will need to close and reopen any command prompt windows for the new variable to take effect.
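Terraform reads the CLI configuration path from the TF_CLI_CONFIG_FILE environment variable. As an example, it can be set as follows (the paths below are assumptions; substitute the location of your own “cli_config” folder):

```shell
# Windows: persist the variable for future sessions (example path)
setx TF_CLI_CONFIG_FILE "C:\terraform\cli_config\terraform.rc"

# Linux/macOS equivalent: add this line to your shell profile (example path)
export TF_CLI_CONFIG_FILE="$HOME/terraform/cli_config/terraform.rc"
```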

Open the “terraform.rc” file in a text editor (e.g., Visual Studio Code) and enter the following into the file:

Content of terraform.rc file

—————————

credentials "app.terraform.io" {
  token = "your_token"
}

Replace your_token with the token you generated earlier, keeping the double quotes. Save the file and close the editor.

Creating a Backend File

Navigate into the folder where your Terraform definition is (e.g., “deploy_datacenter”). Create a text file called “backend.tf” and open it in your text editor. This file will tell Terraform where to store the state, using the credential token we supplied in the “terraform.rc” file. We will enter the following details into the file:

Content of backend.tf file

————————-

terraform {
  backend "remote" {
    organization = "your_organization_name"

    workspaces {
      name = "your_workspace_name"
    }
  }
}

Note: The token itself does not go in this file; Terraform reads it from the “terraform.rc” credentials file. Make sure the organization and workspace names are wrapped in double quotes.

Saving and Closing the Editor

Once you have entered the relevant information into the “backend.tf” file, save the file and close the text editor.

Initializing Terraform

From within your folder, execute “terraform init”. Terraform will read the “backend.tf” file, pick up your token from the “terraform.rc” file, connect to Terraform Cloud, and create a workspace under your organization. If all goes well, you should see a message indicating that Terraform has successfully initialized.

Viewing the State Record in Terraform Cloud

After running “terraform init”, if you look in Terraform Cloud in your workspace, you will find that we now have a stored state from the execution. Future changes in state will now be recorded as a new entry here.

Conclusion

In this blog post, we have explored how to configure Terraform Cloud for remote state management. We have covered creating an account on Terraform Cloud, generating a token, storing the token, configuring Terraform to read the token, creating a backend file, and initializing Terraform. By following these steps, you can now use Terraform Cloud to store and manage your infrastructure as code.

Terraforming vSphere

Applying Terraform Definitions in a vSphere Environment – Part III

In the previous parts of this series, we set up Terraform and created a basic definition to create a virtual datacenter. In part two, we initialized the Terraform folder and produced a plan of the changes our definition will make. Now, it’s time to apply the change and see our new datacenter take shape in the vSphere inventory.

Applying the Definition

To apply our definition, we use the terraform apply command, passing the name of the plan file we saved earlier with the -out flag. In our case, the plan file is named newDC.plan. Here’s the command:

terraform apply newDC.plan

When we execute this command, Terraform will process the plan and create the new datacenter in the vSphere inventory. We can see the progress of Terraform as it processes the plan, and once the command is complete, we can see that our new datacenter has been created.

Destroying the Resource

Now that we have applied our definition and created our new datacenter, we no longer need it. To clean up and avoid having an outdated state file, we will use Terraform to destroy the resource. To do this, we execute the following command:

terraform destroy -var "datacenter=our_new_datacenter"

When we run this command, Terraform will print out to the console what resources will be affected by the command. In our case, only the datacenter resource will be affected. To confirm the destruction, we type “yes” and press Enter.

Once the command is complete, we can see that the datacenter resource has been destroyed in the vSphere inventory.

Terraform State File

As mentioned earlier, Terraform cannot work without its state file. The state file allows Terraform to track the infrastructure it manages, recording the infrastructure components, their settings (configuration), and the dependencies between them. Without a stored state, Terraform would not know which resources it has already created or how our definitions map onto the real environment.
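For reference, a local state file (terraform.tfstate) is plain JSON. A heavily trimmed, illustrative sketch, not a complete state file, looks something like this:

```json
{
  "version": 4,
  "resources": [
    {
      "type": "vsphere_datacenter",
      "name": "dc",
      "instances": [
        { "attributes": { "name": "our_new_datacenter" } }
      ]
    }
  ]
}
```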

In this series of posts, we have been storing the state file locally, which is perfectly fine for testing and learning purposes. In a future post, I will discuss how to properly manage your state files.

Conclusion

In this third part of our series on getting started with Terraform in a vSphere environment, we have applied our definition and created our new datacenter in the vSphere inventory. We have also destroyed the resource using Terraform, ensuring that our state file remains up-to-date.

Terraform is a powerful tool for managing your infrastructure as code, and with these posts, you should now have a good understanding of how to get started with Terraform in a vSphere environment. In the next post, we will discuss more advanced topics such as using modules and count-based resources.

Until then, happy automating!

Terraforming Your vSphere Environment

In part two of this series on getting started with Terraform in a vSphere environment, we will focus on initializing the Terraform folder and producing a plan of the changes our definition will make. Before we can execute our plan, we need to ensure that the required providers exist within our project structure.

To initiate the Terraform process, we open a command prompt and navigate into our `terraform/deploy_datacenter` folder. Once in the folder, we run the command `terraform init`. This command processes the files in the current folder, looking for references to providers. In our case, it will find a reference to the vSphere provider and download it directly from Hashicorp.

Once the provider is downloaded, we have everything we need to execute our first Terraform run and create a new datacenter in our vSphere inventory. However, it’s good practice to run a plan first to check what will happen before we apply the changes. We will also save the plan output so that we can pass the plan file to the apply command later.

To generate the plan, we execute the command `terraform plan -var "datacenter=our_new_datacenter" -out=newDC.plan`. This tells Terraform to create a plan recording the changes that will take place, with the datacenter variable set to our_new_datacenter. The `-out` flag specifies the output file name as `newDC.plan`.

If we check the folder, we will now see the `newDC.plan` file. However, if we try to open this file in a text editor, we won’t be able to read it! Luckily, Terraform allows us to output the plan to JSON. To do this, we use the command `terraform show -json newDC.plan`.

This will output the plan as JSON, which we can redirect to a file (e.g., `terraform show -json newDC.plan > newDC.plan.json`) and then paste the contents into our favorite JSON viewer to make viewing easier.
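If you have the jq utility installed (an assumption; any JSON tool will do), you can also pull out just the planned actions without saving a file:

```shell
# List each resource address and its planned actions from the saved plan
terraform show -json newDC.plan | jq '.resource_changes[] | {address, actions: .change.actions}'
```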

Now that we have our plan file saved, we can apply the plan, creating our datacenter object. We will do this in part three of this series.

In conclusion, this blog post has covered the initializing of the Terraform folder and producing a plan of the changes our definition will make before applying them to create a new datacenter in our vSphere environment. In the next part of this series, we will apply the plan and create the datacenter object.

Mastering Terraform and vSphere

Getting Started with Terraform in a vSphere Environment – Part 1

In this series, we will explore how to use Terraform to automate the creation and configuration of a virtual datacenter in a vSphere environment. In part one, we will focus on setting up Terraform and creating the basic definitions for our infrastructure.

To follow along with this series, you will need to have the following already set up and configured:

* The basic folder structure for this series (a “terraform” folder containing a “deploy_datacenter” subfolder)

* Terraform downloaded and installed on your system (available for Windows, Linux, and macOS)

* The Windows x64 version of Terraform will be used in this series

Once you have the basic setup complete, open a command prompt and navigate into the deploy_datacenter folder. Type “terraform version” and hit enter to verify that Terraform is installed correctly and to see the version you are running.

Next, we will create some stub files to define our infrastructure. We will create the following files, then edit each one in turn to provide connection details, credentials, and our infrastructure (defined in code):

* provider.tf

* provider_variables.tf

* main.tf

* variables.tf

In the provider.tf file, we will define the provider type we wish to use, along with the information that it requires. The provider_variables.tf file will declare variables that will be used to pass in our connection details and credentials. The main.tf file will tell Terraform what object we are going to manipulate (either create, amend, or destroy); in this case, we will specify a ‘vsphere_datacenter’ resource to create. The variables.tf file will declare the value we wish to pass into the main.tf file; in this case, the name of our datacenter.


Once you have created all of the files, let’s start by editing our provider.tf file. You may be wondering where to find the information the provider requires. It just so happens that the documentation Terraform provides is very detailed, and everything we need to know can be found in the vSphere provider documentation on the Terraform Registry.
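As a rough sketch of what the four files might contain (the values here are illustrative assumptions, not the exact code from the original posts), the structure looks like this:

```hcl
# provider.tf: define the vSphere provider and the details it needs
provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_user
  password             = var.vsphere_password
  allow_unverified_ssl = true
}

# provider_variables.tf: declare the connection and credential variables
variable "vsphere_server" {}
variable "vsphere_user" {}
variable "vsphere_password" {}

# main.tf: the object we are going to manipulate
resource "vsphere_datacenter" "dc" {
  name = var.datacenter
}

# variables.tf: declare the value we pass into main.tf
variable "datacenter" {}
```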

We will continue our journey in part two of this series, where we will generate a plan, apply the plan, and create our datacenter object. Stay tuned!

Note: This article was written by Paul Davey, CIO at Sonar, Automation Practice Lead at Xtravirt, and guitarist in The Waders. He loves IT, automation, programming, music, and is passionate about helping others learn and grow. Copyright AutomationPro 2018.

Terraform

Infrastructure as Code (IaC) is a powerful concept that has revolutionized the way we manage and provision infrastructure. Terraform is an open-source tool that enables us to use IaC to define and manage our infrastructure. In this blog post, we will explore the basics of Terraform and how it can help us automate and streamline our infrastructure management processes.

What is Terraform?

Terraform is an open-source tool that allows us to define and manage our infrastructure using IaC. It provides a simple and declarative way to describe our infrastructure, and it automatically provisions and updates the resources based on the definitions we provide. Terraform supports a wide range of cloud and on-premises infrastructure providers, including AWS, Azure, Google Cloud, and VMware vSphere.

How does Terraform work?

Terraform works by using IaC to define our infrastructure. We write our definitions in plain text files, which makes it easy to learn and use. These definition files describe the blueprint of our infrastructure, including the resources we need, their properties, and how they should be configured. Terraform reads these definition files and creates a plan that outlines all the changes that need to be made to our environment. We can then apply the plan to provision or update our infrastructure.
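The read-plan-apply cycle described above maps onto a small set of commands:

```shell
terraform init    # download the providers referenced by the definition files
terraform plan    # compare definitions against state and outline the changes
terraform apply   # execute the planned changes against the environment
```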

Terraform provides several benefits over traditional infrastructure management methods. It allows us to define our infrastructure in a declarative way, which makes it easy to understand and maintain. It also enables us to version control our configurations, which makes it easier to track changes and roll back to previous versions if needed. Additionally, Terraform provides a graph that outlines all the components and settings in our plan, which allows it to map dependencies and execute as much of the plan as possible in parallel. This ensures that infrastructure changes can be carried out quickly and efficiently.

What are the key features of Terraform?

Some of the key features of Terraform include:

1. Infrastructure as Code (IaC): Terraform allows us to define our infrastructure using IaC, which makes it easy to manage and version control our configurations.

2. Declarative configuration: Terraform provides a simple and declarative way to describe our infrastructure, which makes it easy to understand and maintain.

3. Automated provisioning and updates: Terraform automatically provisions and updates our infrastructure based on the definitions we provide, which saves time and reduces errors.

4. Version control: Terraform allows us to version control our configurations, which makes it easier to track changes and roll back to previous versions if needed.

5. Parallel execution: Terraform creates a graph that outlines all the components and settings in our plan, which allows it to map dependencies and execute as much of the plan as possible in parallel.

6. Provider integration: Terraform supports a wide range of providers, including AWS, Azure, Google Cloud, and VMware vSphere. This enables us to use the same tool to manage multiple environments.

7. Custom provider development: If we need a provider that is not available, we can write our own to bridge the gap.

Conclusion

In conclusion, Terraform is a powerful tool that enables us to use IaC to define and manage our infrastructure. It provides a simple and declarative way to describe our infrastructure, and it automatically provisions and updates the resources based on the definitions we provide. With its support for multiple providers and custom provider development, Terraform is a versatile tool that can be used to manage a wide range of environments. In future posts, we will explore how to use Terraform in a VMware vSphere environment.

Unlocking the Power of Infrastructure As Code

Infrastructure as Code (IaC) is a practice that involves managing and provisioning infrastructure resources such as virtual machines, networks, and storage through code and configuration files, rather than through manual processes. This approach provides several benefits, including:

1. Version Control: IaC allows you to manage your infrastructure configurations in version control systems like Git, which enables collaboration, tracking changes, and rolling back to previous versions if needed.

2. Consistency: By defining your infrastructure as code, you can ensure consistency across different environments and deployments, which helps to reduce errors and improve reproducibility.

3. Reusability: IaC content can be reused across different environments and applications, which saves time and effort compared to manual configuration.

4. Faster Deployment: With IaC, you can automate the deployment of your infrastructure, which speeds up the process and reduces the risk of human error.

5. Improved Security: By defining security policies and access controls in code, you can ensure that your infrastructure is secure and compliant with regulatory requirements.

6. Better Governance: IaC provides centralized control over development, testing, and release of your infrastructure, which improves governance and reduces the risk of unauthorized changes.

7. Reduced Downtime: With IaC, you can quickly recover from outages by redeploying your infrastructure, which minimizes downtime and improves availability.

8. Improved Collaboration: IaC enables IT teams to collaborate more effectively by providing a common language and set of tools for managing infrastructure.

9. Cost Savings: By reducing manual effort and improving efficiency, IaC can help you save costs compared to traditional manual configuration methods.

However, adopting IaC requires some investment: teams need to learn the technology and adapt their business processes, and commercial IaC solutions may carry licensing costs. For most organizations, though, the benefits of IaC far outweigh these costs, making it a worthwhile investment for improving infrastructure management practices.

vRA

vRA, vRO, and the Mystery of the Missing VMs

As an IT professional, I’ve dealt with my fair share of unexpected issues and head-scratching problems. But last week, I encountered something that really had me stumped. A customer had an environment consisting of vSphere, vRA, and vRO, with the usual suspects of IAAS roles duplicated and sitting behind a load balancer.

While rewriting some vRO workflows and adjusting blueprints for them, I noticed something strange happening. When I destroyed test VMs through vRA, they weren’t always deleted from the vSphere inventory. Sometimes, the VMs would be moved to a folder in the inventory with the current date and time stamp appended as a suffix to the VM name.

I have to admit, this baffled me. I spent some time digging into the issue, only to find that the solution was quite simple once I knew what to look for. The customer had multiple Windows boxes with the IAAS/Web/Dem roles separated across them, and they had this setup duplicated and sitting behind their load balancer. The setting doDeletes had been configured on just one of the IAAS role boxes.

So, depending on which box serviced the request, the VM either got deleted or got moved. It was a simple fix, but it definitely caused some temporary head-scratching!

This experience reminded me of the importance of thoroughly reviewing and understanding all aspects of the environment before implementing any changes. It’s easy to overlook seemingly minor details, only to have them cause major issues down the line.

In this case, the solution was straightforward once I knew what to look for. But, it could have easily been a more complex issue that required a lot more time and effort to resolve.

I hope that by sharing this experience, others can learn from my mistake and avoid similar head-scratching situations in their own environments. As always, thorough planning and testing before implementing any changes is essential to ensure a smooth and successful outcome.


Unleashing the Power of LogManager in vRO

Logging in VMware vRO Just Got a Whole Lot Easier!

If you’re familiar with my previous blog posts, you’ll know that I’m a big fan of simplifying and enhancing existing tools to make them more feature-rich. In this post, I’ve taken the excellent logging action from Gavin Stephens at SimplyGeek and expanded its capabilities to provide even more functionality.

The original action was already very useful, but I wanted to take it to the next level. So, I rewrote the bulk of the action to include the following features:

* The ability to pass the log attribute as an input into each workflow component (scripts, sub-workflows, actions, etc.) and set it as an output for the component. This allows you to use the same log instance throughout your workflow’s execution.

* Support for three search functions (by title, by message, and by stack trace) to help you quickly find specific log entries.

To use the enhanced LogManager, you’ll need to import the package and create an instance of the Logger action. Then, you can start creating log entries using the LogManager instance. Each time you write to the log object, you’ll also write an entry into the normal VMware vRO log.

Here’s an example workflow that demonstrates how to use the enhanced LogManager:

```json
{
  "name": "Example Workflow",
  "description": "A simple workflow that demonstrates the use of the LogManager action.",
  "version": 1,
  "inputs": {
    "log": {
      "type": "Any"
    }
  },
  "outputs": {
    "log": {
      "type": "Any"
    }
  },
  "tasks": [
    {
      "name": "Write Log",
      "action": "LogManager",
      "inputs": {
        "log": "log"
      },
      "outputs": {
        "log": "log"
      },
      "script": {
        "language": "javascript",
        "content": "log.write('Hello, World!');"
      }
    }
  ]
}
```

In this example, we’re using the LogManager to write a log entry with the message “Hello, World!” Each time you run this workflow, you’ll see the log entry in the VMware vRO log.

To view the full contents of the ‘log’ object, simply output it. The log is a JSON object, so you can parse and display its contents with any JSON viewer or a little JavaScript.
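To make the idea concrete, here is a minimal, illustrative sketch of a pass-through log object in vRO-style JavaScript. This is not the code from the package; the real LogManager action is richer (timestamps, vRO log mirroring, and the three search functions):

```javascript
// Minimal illustrative log object (not the actual LogManager code).
// It accumulates entries and supports a simple search by message text.
function makeLog() {
    return {
        entries: [],
        write: function (title, message) {
            this.entries.push({ title: title, message: message });
        },
        findByMessage: function (text) {
            return this.entries.filter(function (e) {
                return e.message.indexOf(text) !== -1;
            });
        }
    };
}
```

An object shaped like this can be set as an output of one workflow component and passed as an input to the next, so the whole run shares a single log instance.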

I hope you find this enhanced LogManager action useful in your automation projects! If you have any questions or feedback, please don’t hesitate to reach out. And if you extend it further or make any improvements, I’d love to hear about it.

Download the LogManager Package Here

Happy automating!

Revolutionizing Mobile Development

Introducing MockSmart: A Smart and Dynamic Mock RESTful API Solution

As an automation expert, I often encounter situations where I need to test and validate workflows and software applications against various RESTful APIs. However, in many cases, the APIs are not readily available for testing, either because they are still under development or because the clients do not have a replica of their live environment for testing purposes. In such scenarios, I found myself having to develop solutions quickly, often with tight deadlines and without access to the actual API endpoints.

To address these challenges, I decided to create my own mock server solution that would allow me to easily mock RESTful APIs and test my workflows and applications against them. However, I wanted this solution to be more dynamic and flexible than the existing solutions available in the market. I wanted to be able to specify the response I wanted to receive back from my mocked system as an entry in the request header, along with the HTTP status code (for example, 200, 403, 404, etc.). This approach would allow me to create complex scenarios with timeouts and other behaviors that are not easily achievable with traditional mocking solutions.

I called this solution MockSmart, as it is designed to be smart and dynamic in its behavior. With MockSmart, you can quickly mock any RESTful API and consume it from any of your favorite automation products or development environments. As long as it can make REST requests and handle REST responses, MockSmart is for you.
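As an illustration of the idea, a request might look like the following. The header names and URL here are hypothetical, shown only to convey the concept; check the MockSmart documentation for the exact keys:

```shell
# Hypothetical header names and endpoint, for illustration only
curl -H "X-Mock-Status: 404" \
     -H "X-Mock-Body: {\"error\": \"not found\"}" \
     http://your-mocksmart-host/api/v1/items
```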

MockSmart is built using Visual Studio, C#, and ASP.Net running on .Net Core, with Ubuntu as the underlying operating system. This solution is free, and there is no official support, but I will try to help if you have any issues. You can reach out to me at bugs.mocksmart@automationpro.co.uk for any questions or issues.

I plan to release several packs for different use cases over the coming months, starting with a pack for VMware vRO, which will demonstrate how to use the appliance when developing workflows. In the meantime, feel free to try out MockSmart and explore its capabilities. With this solution, you can mock any RESTful API and test your workflows and applications against it.

In conclusion, MockSmart is a simple yet powerful tool that enables you to mock RESTful APIs with ease. Its dynamic behavior and ability to specify responses in the request header make it a versatile solution for automation testing and development. Give it a try today and see how it can help you streamline your workflows and improve your development efficiency.

Ansible Lab Setup – Mastering Sample Files, Configuration Files, and Multiple Test Nodes (Part 3)

Ansible Lab: Customizing the Ansible Configuration File

In our previous Ansible lab posts, we covered the basics of installing Ansible and creating inventory files for our machines. Today, we’ll dive deeper into customizing the Ansible configuration file to make our automation tasks more efficient.

By default, when you install Ansible, it provides inventory and configuration file templates for you. Let’s take a look at the contents of the ansible.cfg file located in /etc/ansible on the Ansible server. We won’t go through the contents of the ansible.cfg or hosts files, as they are well documented inline and online by Ansible. Instead, we’ll create a simple configuration file to get us started.

Renaming the Ansible Configuration File

We can take a copy of the ansible.cfg file to create our own version of it. For this example, let’s call our new configuration file ansible-custom.config. Using nano, we’ll edit the newly created file and insert the following:

Inventory Entry

[defaults]
inventory = /path/to/inventory/file

As you can see, we’ve added an inventory entry pointing to the inventory file we created earlier. This way, we won’t have to specify the -i flag each time we use the Ansible CLI.

Adding Additional Configuration Options

We’ve also specified the root user as the remote user for operations on the managed nodes and set a timeout value for SSH connections. Here’s the edited ansible-custom.config file:
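The original post showed the edited file as a screenshot; a sketch of what it might contain is below. The paths and values are assumptions, but the option names are standard Ansible configuration keys:

```ini
[defaults]
# Default inventory, so -i is no longer required on every command
inventory = /path/to/inventory/file
# Connect to managed nodes as root
remote_user = root
# SSH connection timeout in seconds
timeout = 10
```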

Testing the Ansible Configuration File

To test our new configuration file, we can run the following command:

```shell
# Substitute the inventory path and playbook name for your own
ansible-playbook -i /path/to/inventory/file your-playbook.yml
```

As you can see, we specified the inventory file location, and the command worked as expected. We then re-ran the command without specifying the inventory file, and Ansible successfully read the inventory file path from the ansible-custom.config file we modified.
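Because our file does not use the default ansible.cfg name, Ansible needs to be told where to find it; the ANSIBLE_CONFIG environment variable does this. The commands below use an ad-hoc ping as an illustrative test (paths are examples):

```shell
# Point Ansible at the custom configuration file (example path)
export ANSIBLE_CONFIG=/etc/ansible/ansible-custom.config

# Inventory given explicitly on the command line
ansible all -m ping -i /path/to/inventory/file

# Inventory read from the configuration file instead
ansible all -m ping
```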

Overriding the Inventory File on the Command Line

Why do we need to specify the inventory file path in the configuration file if we can override it on the command line? The answer is simple: flexibility. We may have another inventory file we want to use for testing or specific tasks, and by specifying it on the command line, we can quickly switch between different inventory files without modifying the configuration file each time.

Conclusion


In this post, we’ve explored how to customize the Ansible configuration file to make our automation tasks more efficient. By creating a simple configuration file and specifying an inventory entry, we can avoid having to specify the -i each time we use the Ansible CLI. Additionally, we learned how to override the inventory file on the command line, providing us with flexibility in our automation tasks.

As always, thanks for reading, and we’ll see you in the next post!