VMworld 2019

Five Things I Learned at My First VMworld

As someone who has been in the virtualization industry for a while, I have always heard great things about VMworld, but it wasn’t until my first time attending in 2012 that I truly understood the power of this event. Here are five things I learned during my first VMworld that have had a lasting impact on my career and personal brand:

Networking is key – Before attending VMworld, I thought conferences were all about listening to keynotes and sitting in sessions. But what I quickly realized was that the real value of the event came from the people I met. I made connections with fellow attendees, vendors, and VMware employees that have been instrumental in helping me grow my career and brand.

Be open to new experiences – As a virtualization professional, I had never been to a conference this large before, so everything was new and exciting. From the bright colors and psychedelic decorations on the show floor to the exclusive Veeam event, I was open to trying new things and stepping out of my comfort zone.

Set clear goals – With so many sessions and activities to choose from, it’s important to have clear goals for what you want to achieve at VMworld. Whether it’s learning about new technologies, meeting certain people, or simply soaking in the atmosphere, having a plan will help you make the most of your time there.

The show floor is not to be missed – Sure, the keynotes and sessions are important, but the show floor is where the magic happens. It’s where you can see the latest technologies up close and personal, get hands-on experience with products, and talk to vendors and other attendees about their experiences.

VMworld begins with you – The tagline for VMworld is “Right Here Right Now,” and that’s exactly what it felt like. It was a place where I could be myself, learn from others, and grow my personal brand. Looking back, attending VMworld was a pivotal moment in my career, and it continues to be an event that shapes my goals and aspirations every year.

A Journey Through Innovation

Demystifying the World of Virtualization and Technology: Thoughts from Dell Technologies World 2018

As I reflect on my recent trip to Dell Technologies World 2018, I am reminded of the transformative power of technology and the impact it has on our lives. This year’s event was a unique experience that offered a glimpse into the future of technology and its potential to shape our world. As someone who is passionate about virtualization and technology, I felt right at home among the visionaries and thought leaders who gathered in Las Vegas to explore the latest advancements in the field.

One of the most striking aspects of this year’s event was the focus on customer transformation. Gone were the days of product-centric talks and presentations; instead, the emphasis was on how technology can be leveraged to drive real change and innovation within our organizations. The keynotes were a testament to this shift in focus, with thought leaders from diverse industries sharing their stories of how they are using technology to reimagine their businesses and push the boundaries of what is possible.

The Social Village was another highlight of the event. This innovative space provided attendees with an opportunity to relax, network, and engage in some truly unique experiences. From The Cube interviews to drone racing with VR headsets, there was no shortage of activities that allowed us to explore the cutting-edge technology on display. The expo floor was equally impressive, with a diverse range of vendors showcasing their latest offerings and providing attendees with a truly comprehensive view of the industry.

Of course, no Dell Technologies World event would be complete without some major announcements, and this year did not disappoint. The introduction of the next-generation all-flash array – PowerMax – was a standout moment for me. This powerhouse of a storage array represents a significant leap forward in terms of performance, scalability, and efficiency, and I am excited to see how it will impact the industry in the months and years to come.

One aspect of this year’s event that I found particularly noteworthy was the lack of a traditional community champion track. While this may seem like a departure from previous events, I believe that it was a deliberate choice to focus on the broader community and encourage attendees to engage with one another in new and meaningful ways. The UK customers who attended the event were some of the most passionate and engaged individuals I have ever met, and it was inspiring to see them sharing their knowledge and experiences with one another.

As I reflect on my time at Dell Technologies World 2018, I am reminded of the power of community and the importance of empowering our users to connect with one another. The bonds that we form within this industry are crucial to our success, and I believe that events like this one play a vital role in fostering those connections. Long may it continue!

In conclusion, Dell Technologies World 2018 was an experience that I will not soon forget. From the thought-provoking keynotes to the innovative exhibits on the expo floor, this event truly had something for everyone. As we look towards the future of technology and virtualization, I am excited to see what new developments and announcements will come our way in the months and years to come. Here’s to 2019 and beyond!

Unlocking Virtualization for Kubernetes with Platform9 KubeVirt

Platform9 KubeVirt: A Hands-on Lab Experience

As an automation guy with a love for containers, I was excited to try out Platform9’s KubeVirt implementation in their hands-on lab (HOL). After using Harvester for running VMs mainly for deploying Rancher RKE clusters, I was eager to see how Platform9 compared. In this blog post, I will share my experience with the platform and highlight the differences between it and Harvester.

Getting Started with Platform9

To get started with Platform9, you create a cluster using pf9ctl, their command-line tool. The process is straightforward, and you can follow the instructions in the official documentation. For my HOL, I created a K8s cluster with one Master node and one Worker node. The prep-node command of pf9ctl installs an agent and promotes the server to a PMK (Platform9 Managed Kubernetes) node that can then be used to build a cluster.
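
For reference, here is a rough sketch of the node-prep step as I ran it in the lab. It assumes pf9ctl is already installed and configured against your Platform9 account; the exact flags may differ, so check the official documentation:

```bash
# Prepare the host: installs the Platform9 host agent and registers the
# server as a PMK node (run on each server you want in the cluster).
sudo pf9ctl prep-node

# Once both nodes show up in the Platform9 UI, the actual cluster
# (one Master, one Worker in my HOL) is created from the web console.
```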

Installing KubeVirt

Platform9 provides KubeVirt as an add-on that can be installed with a single click. From the Infrastructure menu, select Clusters -> Managed to see the list of managed clusters, then select the cluster intended for KubeVirt. The Platform9 KubeVirt documentation details the steps for installing KubeVirt on a new cluster, but if your cluster already exists, the add-on can be enabled without issues.
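
After enabling the add-on, a quick sanity check with plain kubectl confirms that KubeVirt is up. These are the standard upstream checks, nothing Platform9-specific:

```bash
# KubeVirt components should be running in the kubevirt namespace
kubectl get pods -n kubevirt

# The KubeVirt custom resource reports "Deployed" once the operator is done
kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'
```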

Similarities and Differences with Harvester

There are some similarities between Platform9 and Harvester, but there are also significant differences. One of the main differences is that Platform9 stays very close to the upstream project, so the solution feels familiar if you already know KubeVirt, and if you ever move to another KubeVirt offering, the changes will be minimal. In contrast, Harvester offers a more curated experience but with less flexibility than Platform9.

Managing VMs

In the Platform9 KubeVirt documentation, you can find a lot of information about managing VMs. There are three areas of interest in the Virtual Machines section: All VMs, Live Migrations, and Instance Types. In the All VMs area, you can easily see the total number of VMs, which ones are running, and which are being migrated. During the Virtual Machine creation process, as you select the desired options for your VM, the corresponding YAML updates live. This is a great feature for learning the YAML equivalent of the VM creation process, which you can later feed into CI/CD pipelines to automate VM provisioning.
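
To give a feel for the YAML the UI generates, here is a minimal VirtualMachine manifest adapted from the upstream KubeVirt example; the cirros container disk is just a tiny demo image, not something Platform9-specific:

```bash
kubectl apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
EOF

# Start the VM once the object exists
kubectl patch vm testvm --type merge -p '{"spec":{"running":true}}'
```

Because the manifest is plain Kubernetes YAML, it drops straight into a Git repo and a CI/CD pipeline.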

Upgrading the Cluster

When creating the cluster, we deliberately selected an older version of Kubernetes so we could run an upgrade and see how the VMs are handled. To upgrade the cluster, select Infrastructure -> Clusters -> Managed and choose the cluster to be upgraded. The steps are very similar to the initial install. During the upgrade, I noticed that the VMs were first moved to the Worker node, which is expected, since the Master nodes are the first ones upgraded on K8s.
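
A couple of kubectl one-liners are handy while the upgrade runs, to watch where the VM instances land as nodes get drained (vmi is the short name for VirtualMachineInstance):

```bash
# Follow VM instances and the node they are scheduled on
kubectl get vmi -o wide -w

# Track any live migrations triggered by the node drain
kubectl get virtualmachineinstancemigrations -A
```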

Conclusion and Future Plans

In conclusion, Platform9’s KubeVirt implementation provides a hands-on lab experience that is different from Harvester in several ways. While both platforms offer similar functionality, Platform9’s closer alignment with upstream projects and its flexibility make it an attractive option for those looking for a more customizable solution. In my next blog post, I plan to dive deeper into the storage and networking aspects of Platform9 KubeVirt and compare them to Harvester’s offerings. Additionally, I will try to get my hands on PMK access to build a cluster in my homelab and test more stuff related to MetalLB, which looks like an interesting feature! Stay tuned for more updates!

Effortlessly Create Linux VMs with Harvester HCI

Deploying a Linux VM on Harvester: Easy as Expected

In my previous article, we explored how to integrate Harvester into the Rancher UI and create a new K8s cluster with just a few clicks. Today, we’re going to take it up a notch and see how fast we can deploy a Linux VM using Harvester. Spoiler alert: it’s easier than expected!

To get started, you’ll need an img or qcow2 file of your preferred Linux distribution. I’m using Ubuntu in this example, so I’ll be importing the latest version from cloud-images.ubuntu.com. Once you have your image file ready, follow these steps:

1. Navigate to the Images tab in Harvester and click on “Create.”

2. Select “Import Image” and choose the image file you prepared earlier.

3. Wait for the import to complete, and once it does, you’ll see your new image appear under the “Images” tab.

4. Now, navigate to the Virtual Machines tab and click on “Create.”

5. Select “Linux” as the type and choose the newly imported image. You can also assign a name and description for your VM if desired.

6. Click on “Advanced Options” to customize your VM further. Here’s where things get interesting! (A sample cloud-init user data snippet is shown after this list.)

7. Harvester offers a “Namespace” option, which comes straight from Kubernetes. With namespaces, you can logically separate your VMs by project or owner, creating a more organized and secure environment.

8. Once you’ve configured your VM settings, click on “Create” to deploy your new Linux VM.

9. Finally, you can interact with your new VM using the console interface, just like any other virtual machine platform. The IP address assigned is from a DHCP network outside the Harvester environment, which allows for easy configuration and management of your VM.
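
As promised in step 6, here is a small example of the kind of cloud-init user data you can paste into the VM’s Advanced Options, so the cloud image is actually reachable after first boot (cloud images ship without a default password; the SSH key below is a placeholder):

```bash
# Keep the cloud-config in a file and copy-paste its contents into the
# "User Data" field under Advanced Options.
cat <<'EOF' > user-data.yaml
#cloud-config
# Demo credentials only: prefer SSH keys for anything beyond a lab
password: changeme
chpasswd:
  expire: false
ssh_authorized_keys:
  - ssh-ed25519 AAAA...your-public-key... user@example.com
package_update: true
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
EOF
```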

As you can see, deploying a Linux VM on Harvester is incredibly straightforward. In fact, it’s almost too easy! With just a few clicks, you can have a fully functional virtual machine up and running, complete with customizable settings and the ability to separate your VM from other projects. Of course, this is just the beginning – we’ll be exploring more advanced features like Windows VM creation and ISO image import in future articles. Stay tuned!

So there you have it, folks! Harvester makes deploying a Linux VM a breeze, and with Namespace, you can take your virtualization game to the next level. Don’t forget to check out my previous article on integrating Harvester into Rancher UI for more information on how to get started with this powerful platform. Until next time, stay automated and keep on containerizing!

Ceph as a Storage Provider on Proxmox

Ceph: My Storage Solution of Choice

As a DevOps and virtualization enthusiast, I’ve been exploring various storage solutions for my projects. Recently, I discovered Ceph, an open-source distributed object store that has captured my interest. In this blog post, I’ll share my experience with Ceph, its benefits, and how to deploy it on Proxmox.

Why Ceph?

I’ve always been fascinated by distributed systems, and Ceph fits the bill. It allows me to have multiple machines working together as a single storage cluster, providing excellent performance and scalability. With Ceph, I can easily add more machines to my cluster as needed, making it an ideal solution for projects with growing storage needs.

Moreover, Ceph is designed to be highly fault-tolerant, meaning that even if one or more machines in the cluster fail, the data remains accessible and usable. This is particularly useful in environments where hardware failures are common or expected.

Deploying Ceph on Proxmox

Proxmox VE is a hypervisor that supports Ceph out of the box. Deploying Ceph on Proxmox is a straightforward process that can be completed in just a few clicks. The Proxmox documentation provides detailed instructions on how to set up a Ceph cluster, which I followed to deploy my own Ceph cluster.
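
For those who prefer the shell, the pveceph CLI does roughly what the wizard does behind the scenes. This is a sketch from my notes, so verify the exact options against the Proxmox documentation for your version:

```bash
# Install the Ceph packages on each node
pveceph install

# Initialise Ceph once, pointing at the cluster network
pveceph init --network 10.0.0.0/24

# Create a monitor on each node that should run one
pveceph mon create

# Turn a blank disk into an OSD (repeat per disk, per node)
pveceph osd create /dev/nvme0n1

# Create a replicated pool and register it as Proxmox storage
pveceph pool create vm-storage --add_storages
```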

My Experience with Ceph

I started by setting up a two-node Ceph cluster with Proxmox. At first, Ceph reported an unhealthy state, because the CRUSH map created by Proxmox expects a three-host configuration with at least one OSD per host. Once I added a third node to the cluster, Ceph started replicating data across all OSDs to satisfy the CRUSH map policy.
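
A few standard Ceph commands make it easy to follow the cluster while it settles after the third node joins:

```bash
ceph -s                  # overall health plus recovery/rebalance progress
ceph osd tree            # hosts and OSDs as the CRUSH map sees them
watch -n 5 ceph pg stat  # PGs moving from degraded/misplaced to active+clean
```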

Here’s what the PGs looked like as they were being moved across the OSDs:

[insert image]

One thing I noticed about storage usage on Proxmox is that thin provisioning does not work the way it does on VMware VMFS. Whether a virtual disk is thin provisioned depends on the storage backend and the disk format, which took some getting used to. However, once I understood how it worked, I was able to configure my storage effectively.
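
One quick way to see thin provisioning at the Ceph level is to compare provisioned versus actually used space per RBD image. The pool name below is just an example; use whatever you called yours:

```bash
# Shows PROVISIONED vs USED for every disk image in the pool
rbd du -p vm-storage
```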

This is the current state of the storage side of my Proxmox cluster:

[insert image]

As you can see, I have two nodes with a total of four OSDs, providing plenty of storage space for my VMs. I plan to move more VMs into this storage and see how Ceph performs under heavy I/O demand.

Hardware Used in the Cluster

I’ve documented the hardware used in my Ceph cluster on my website. The hardware includes two servers with Intel Xeon E5-2630 v4 processors, 128 GB of RAM, and 4 x 1 TB SSDs for the OSDs. I also have a third server with an Intel Xeon E5-2630 v4 processor, 64 GB of RAM, and 2 x 2 TB NVMe SSDs for the client.

Conclusion

Ceph has been an excellent choice for my storage needs. Its distributed architecture, fault tolerance, and scalability make it an ideal solution for projects with growing storage demands. Deploying Ceph on Proxmox is straightforward, and the resulting cluster provides high performance and reliability. I’m excited to continue exploring the capabilities of Ceph and see how it performs under heavy I/O demand.

Unleashing Ceph

As an automation guy with a love for containers, I’m always looking for ways to improve my homelab setup. Recently, I decided to experiment with Ceph as a storage solution, but I quickly ran into a problem: the Ceph documentation suggests that you need at least 3-4 hosts to achieve decent performance. This is a bit of an issue for me, as I can only afford to run three machines in my homelab.

Despite this limitation, I was determined to make Ceph work for me. After some research, I discovered that Proxmox VE, an open-source virtualization platform, supports Ceph as a storage solution. This was exactly what I needed – a way to use Ceph with only three machines.

I recently installed Proxmox on my second machine, and I’m excited to report that it has been working flawlessly. The installation process was surprisingly easy, and the web interface is intuitive and user-friendly. With Proxmox, I can manage all of my virtual machines (VMs), including those running Ceph.

One of the things I love about Proxmox is its support for containers. As an automation guy, I’m always looking for ways to simplify my workflow and increase efficiency. Containers are a great way to do this – they allow me to package up my application and its dependencies into a single, portable unit. This makes it easy to deploy and manage my applications across different environments.

With Proxmox, I can easily create and manage containers for my Ceph cluster. For example, I can use Docker to create a container that runs the Ceph client software, and then use Proxmox to manage that container. This allows me to keep all of my Ceph-related components in a single, isolated environment, which makes it easier to troubleshoot issues and maintain security.

Another benefit of using Proxmox with Ceph is the ability to easily scale my storage capacity. With Ceph, I can add new machines to my cluster as needed, and Proxmox will automatically recognize and incorporate them into my storage pool. This means that I can easily expand my storage capacity as my needs grow, without having to worry about complex configuration changes or downtime.

Overall, I’m really happy with how well Proxmox has worked out for me in my homelab. It has given me a powerful and flexible platform for managing my Ceph cluster, and it has simplified the process of working with containers. If you’re looking for a solid virtualization solution that supports Ceph and containers, I highly recommend giving Proxmox a try.

As an automation guy with a love for containers, I’m always on the lookout for new and innovative solutions to improve my homelab setup. With Proxmox and Ceph, I’ve found a powerful and flexible combination that has helped me streamline my workflow and increase efficiency. Whether you’re a fellow automation enthusiast or just looking for a better way to manage your storage, I hope this blog post has been helpful and informative. Thanks for reading!

Ceph as My Storage Provider?

Ceph: The Future of Storage or Overhyped Technology?

As I delve into the world of Ceph, a highly scalable and intelligent storage system, I can’t help but wonder if it’s truly the future of storage or just an overhyped technology. The official definition from Ceph’s website states that it supports object, block, and file storage in one unified storage system, leaving me with more questions than answers. In this blog post, I’ll share my experience planning to install and configure Ceph in a 3-node cluster using Proxmox UI, and discuss the challenges I faced with storage devices.

My Journey with Ceph

I started planning to install and configure Ceph in a 3-node cluster a few weeks ago. Everything was done via the Proxmox UI, which made the process relatively easy. However, one of the main issues I faced was storage devices: Ceph doesn’t play well with consumer SSDs, disks, or NVMe drives, which was a major challenge for me.

I have a pair of Samsung 970 EVO Plus (1 TB) drives that were working fine with vSAN ESA, but I decided to move to Intel enterprise NVMe because there is a lot of information around the web pointing to poor Ceph performance with consumer NVMe drives. The Supermicro machine is already running Proxmox, so I thought it was time to take the Ceph adventure to the next level.
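
If you want to see the difference for yourself before buying enterprise drives, the classic test is single-threaded synchronous 4k writes, which is close to the pattern Ceph uses for its journal/WAL. A minimal fio run looks like this (destructive, only point it at an empty disk):

```bash
# WARNING: writes directly to the device; all data on it will be lost
fio --name=ceph-sync-write --filename=/dev/nvme0n1 \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based
```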

Challenges with Storage Devices

One of the biggest challenges I faced during my journey with Ceph was finding suitable storage devices. The official documentation states that Ceph supports object, block, and file storage in one unified storage system, but it doesn’t specify the type of storage devices required. This lack of clarity led me to spend hours researching and experimenting with different storage devices before I finally found a solution that worked for me.

I initially used consumer SSDs, disks, and NVMe drives, which resulted in poor performance and stability issues. After researching further, I discovered that Intel enterprise NVMe is the way to go when it comes to Ceph storage. This was a game-changer for me, as I was able to achieve better performance and stability with my Ceph cluster.

Conclusion

In conclusion, my experience with Ceph has been both challenging and rewarding. While the official documentation could be more specific about the type of storage devices required, I found that Intel Enterprise NVMe is the way to go for optimal performance and stability. With Ceph, you can achieve operational excellence through scalable, intelligent, reliable, and highly available storage software.

Whether Ceph is the future of storage or just an overhyped technology remains to be seen. However, based on my experience so far, I believe that Ceph has the potential to revolutionize the way we think about storage in the future. With its ability to support object, block, and file storage in one unified storage system, Ceph is definitely a technology worth exploring further.

Unleashing KubeVirt

Hey there, folks! It’s your friendly neighborhood Automation Guy here, and today I want to talk about something that might be a game-changer for those of us who love containers. You know how we’ve been using Kubernetes (K8s) to manage our containerized apps for the past few years? Well, it looks like there’s a new kid on the block that could potentially disrupt the status quo: KubeVirt.

Now, I know what you’re thinking: “Ariel, haven’t we been using VMware for years to manage our virtual machines?” And you’re right! But here’s the thing: KubeVirt is a new player in the game that promises to deliver the same level of control and flexibility as K8s, but for virtual machines. And let me tell you, it’s been making some serious waves in the industry.

So, why should we care about KubeVirt? Well, for starters, it’s open-source, which means that it’s free to use and customize however we want. And if you’re coming from a VMware background like me, you know how important it is to have a centralized management platform that can handle both containers and virtual machines. KubeVirt offers just that: a single pane of glass for managing all your workloads, whether they’re running on bare metal, virtual machines, or containers.

But here’s the thing: KubeVirt isn’t just a VMware clone. Oh no, it’s so much more than that! It’s a highly scalable, distributed platform that can handle some serious workloads. And the best part? It’s designed to be easy to use and integrate with existing K8s clusters.

Now, I know some of you might be thinking: “But Ariel, I love Harvester! It’s so easy to use and it integrates perfectly with Rancher.” And you know what? You’re right again! Harvester is an amazing tool that makes it easy to manage your virtual machines. But here’s the thing: it’s also a resource hog, and if you’re running it on the same host as your containers, you might find that it’s just too much for your system to handle.

That’s where KubeVirt comes in. It offers the same level of ease of use as Harvester, but without the resource intensity. And with support for features like network policies and SELinux, it’s a serious contender for those looking to manage their virtual machines in a more container-like way.

So, what’s my takeaway from all this? Well, I think it’s time to start exploring KubeVirt as an alternative to Harvester and VMware. It might not be the perfect solution for everyone, but it’s definitely worth checking out if you’re looking for a more streamlined, container-like approach to managing your virtual machines.

And hey, who knows? Maybe one day we’ll see Platform9 and KubeVirt duking it out in the virtual machine management space! (I’m looking at you, Platform9!) But until then, I’m gonna keep experimenting with KubeVirt and seeing just how far it can take me.

Wish me luck, folks! It’s time to see what this new kid on the block has to offer. And who knows? Maybe one day we’ll all be running our virtual machines inside containers!

Transforming CloudBuilder Excel Files to JSON

Converting an Excel File to JSON to Automate SDDC Creation in CloudBuilder

As a DevOps engineer, I always look for ways to automate processes and improve efficiency. Recently, I encountered the need to automate the creation of a Software-Defined Data Center (SDDC) using VMware Cloud Foundation (VCF). While Excel’s Deployment Parameter Workbook is a valuable tool for parameterizing the environment, I wanted to explore the possibility of converting it to JSON for easier automation.

In this blog post, I will discuss how to convert an Excel file to JSON using CloudBuilder’s SoS Utility and how to use Ansible to automate the creation of an SDDC based on the resulting JSON file.

Why Convert Excel to JSON?

There are several reasons why converting Excel to JSON can be beneficial for automating the creation of an SDDC:

1. **Easier automation**: JSON is a lightweight, human-readable format that can be easily parsed and processed by machines. This makes it an ideal choice for automation scripts.

2. **Flexibility**: By converting the Excel file to JSON, we can easily modify the values and parameters without having to manually edit the Excel file.

3. **Reusability**: Once we have converted the Excel file to JSON, we can reuse the resulting file in other automation scripts or tools.

How to Convert Excel to JSON Using CloudBuilder’s SoS Utility?

To convert an Excel file to JSON using CloudBuilder’s SoS Utility, follow these steps:

1. **Place the Excel file in the home directory of the user**: Use WinSCP or scp to place the Excel file in the home directory of the user who will be running the SoS Utility.

2. **Run the SoS Utility**: Open a terminal or command prompt and run the following command to convert the Excel file to JSON:

```bash
sosexport -c <excel-file> -o <output-json>
```

Replace `<excel-file>` with the path to your Excel file, and `<output-json>` with the desired output path for the JSON file.

For example, if your Excel file is located at `/home/user/Documents/deployment-parameters.xlsx`, you can run the following command:

```bash
sosexport -c /home/user/Documents/deployment-parameters.xlsx -o /home/user/deployment-parameters.json
```

This will create a JSON file named `deployment-parameters.json` in the home directory of the user.

Tips and Tricks for Automating SDDC Creation with Ansible

Once we have converted the Excel file to JSON, we can use Ansible to automate the creation of an SDDC based on the resulting JSON file. Here are some tips and tricks to keep in mind:

1. **Use cURL**: Instead of using CloudBuilder’s Web UI, we can use cURL commands to interact with its API (see the sketch after this list). This can be faster and more efficient, especially when dealing with large environments.

2. **Validate the creation process**: After creating an SDDC, it’s essential to validate that the creation process was successful. We can do this by checking the execution status of the API call and ensuring that the result status is `SUCCEEDED`.

3. **Use Ansible modules**: Instead of shelling out, we can use Ansible modules, such as `ansible.builtin.uri` for REST calls against CloudBuilder’s API, to simplify our playbook and make it more readable.

4. **Reuse the JSON file**: Once we have created the JSON file, we can reuse it in other automation scripts or tools. This can save us time and effort when creating additional SDDCs or modifying existing ones.
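
To illustrate the cURL point above, here is a hypothetical sketch of kicking off the bring-up directly against Cloud Builder’s REST API. The endpoint path, credentials, and hostname are assumptions for illustration, so double-check them against the Cloud Builder API documentation for your VCF version:

```bash
# Submit the JSON spec generated by the SoS utility (assumed endpoint)
curl -k -u admin \
     -H "Content-Type: application/json" \
     -X POST "https://cloudbuilder.example.com/v1/sddcs" \
     -d @/home/user/deployment-parameters.json

# Poll the execution ID returned in the response until the status reports SUCCEEDED
```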

Conclusion

In this blog post, we explored how to convert an Excel file to JSON using CloudBuilder’s SoS Utility and how to use Ansible to automate the creation of an SDDC based on the resulting JSON file. By converting the Excel file to JSON, we can simplify the automation process and make it more efficient. Additionally, by using Ansible modules and cURL commands, we can streamline our playbook and improve its readability.

Streamline Your Development Workflow with GNU Stow

Greetings, my fellow tech enthusiasts! Today, I’d like to share with you a powerful tool that has revolutionized the way I manage my dotfiles. If you’re tired of manually maintaining your customizations across different machines, then you’re in luck because I’m here to introduce you to GNU Stow.

But before we dive into the wonders of Stow, let me first explain why we need such a tool. As tech enthusiasts, we often find ourselves working on multiple machines, be it laptops, desktops, or servers. And when we switch between these machines, we tend to lose our customizations, such as aliases, plugins, and themes. It’s frustrating, right? Well, that’s where Stow comes in.

Stow is a symlink manager that allows us to manage our dotfiles across different machines. With Stow, we can easily replicate our customizations across all our devices, making our workflow smoother and more efficient. So, let me show you how to get started with Stow.

First things first, we need to install Oh My ZSH! (OMZ) on our machines. OMZ is a framework that manages our zsh configurations, including prompts, plugins, and themes. It’s incredibly easy to install; just run the following command in your terminal:

`git clone https://github.com/ohmyzsh/ohmyzsh.git ~/.oh-my-zsh`

Once you’ve installed OMZ, you can start exploring its vast collection of plugins. These plugins are what make zsh so powerful and customizable. Trust me, you won’t be disappointed!
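
Enabling a plugin is a one-line change to the plugins array in ~/.zshrc; for example, the git, docker, and kubectl plugins all ship with OMZ:

```bash
# In ~/.zshrc:
#   plugins=(git docker kubectl)
# Then reload the shell so the change takes effect:
source ~/.zshrc
```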

Now that we have OMZ set up, let’s talk about how to use Stow. The process is surprisingly straightforward. First, we need to create a directory for our dotfiles:

`mkdir -p ~/.dotfiles`

Next, we need to create subdirectories within the dotfiles directory for each application (package) we want to manage with Stow. For example, if we want to manage our Git configurations, we would create a subdirectory called `git`:

`mkdir ~/.dotfiles/git`

Inside each subdirectory, we place the configuration files for that particular application, laid out exactly as they should appear relative to our home directory. For instance, in our Git subdirectory, we would place our Git configuration file:

`touch ~/.dotfiles/git/.gitconfig`

Now, let’s activate Stow. To do this, we run the following command:

`cd ~/.dotfiles && stow git`

This command tells Stow to symlink the contents of the git package into our home directory (by default, Stow targets the parent of the directory it runs in). Run it once for each package you want to activate, and just like that, our customizations are in place!
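
Here is what the layout and the day-to-day Stow commands look like with a couple of hypothetical packages (the zsh package is just an example):

```bash
# ~/.dotfiles/
# ├── git/.gitconfig   ->  symlinked to ~/.gitconfig
# └── zsh/.zshrc       ->  symlinked to ~/.zshrc
cd ~/.dotfiles
stow git zsh     # create the symlinks in $HOME
stow -n -v git   # dry run: show what would be linked without changing anything
stow -D git      # unstow: remove the symlinks again
```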

But wait, there’s more! We can also use Git to manage our dotfiles. This way, we can easily replicate our customizations across all our machines by simply cloning our Git repository. Here’s how:

1. First, create a new Git repository for your dotfiles:

`git init ~/.dotfiles`

2. Next, add your dotfiles to the repository:

`cd ~/.dotfiles && git add .`

3. Finally, commit your changes, add your remote repository, and push:

`git commit -m "Initial commit of dotfiles"`

`git remote add origin <your-remote-repository-url>`

`git push origin master`

Now, when you switch to a new machine, all you need to do is clone your Git repository to get access to your customizations. It’s that simple!

In conclusion, GNU Stow has been a game-changer for me and my workflow. With its ability to manage my dotfiles across multiple machines, I can focus on more important things, like automation and containerization (shameless plug alert!). So, if you haven’t already, give Stow a try and experience the power of symlink management for yourself. Happy hacking!