Unlocking the Power of vCenter Profiles in vSphere 7

Agustín Malanco, VCDX #141, kicks off a new series of articles on the capabilities introduced with vSphere 7. This first article focuses on creating configuration profiles for our vCenter instances with vCenter Profiles.

For a long time, VMware customers have asked for a standardized, declarative configuration method for vCenter Server, making the desired end-state configuration clear across vCenter instances. With vCenter Profiles, we can export the existing vCenter configuration and cover the following use cases:

1. Create a basic profile for testing

2. Create a profile from the current configuration

3. Verify the current configuration against the created profile

We can interact with vCenter Profiles in two ways:

1. The vCenter Server GUI (Developer Center > API Explorer)

2. The REST API (the appliance API endpoint)

To create a basic profile for testing, we can look up the "ApplianceVcenterSettingsV1ConfigCreate" API call in the API Explorer. The response gives us the structure of the request body, which we can combine with the current configuration to create a profile.

To create a profile from the current configuration, we can issue a GET request to the following path:

```
https:///api/appliance/vcenter/settings/v1/config
```

The response returns the current vCenter configuration, which we can use to create a profile.
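As a sketch of this call in Python (the hostname and session token below are placeholders; the `vmware-api-session-id` header is the standard vSphere REST session header):

```python
# Sketch: export the current vCenter configuration as a profile.
# "vc01.lab.local" and the session token are placeholder values.
import json
import urllib.request

def config_url(vcenter_host: str) -> str:
    """Build the vCenter Profiles config endpoint for a given vCenter host."""
    return f"https://{vcenter_host}/api/appliance/vcenter/settings/v1/config"

def export_profile(vcenter_host: str, session_id: str) -> dict:
    """GET the current configuration; the response body is the profile."""
    req = urllib.request.Request(
        config_url(vcenter_host),
        headers={"vmware-api-session-id": session_id},
    )
    # Requires a reachable vCenter; certificate validation applies as usual.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (against a live vCenter):
# profile = export_profile("vc01.lab.local", "<session token>")
# with open("vcenter-profile.json", "w") as f:
#     json.dump(profile, f, indent=2)
```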

To verify the current configuration against the created profile, we can issue a GET request to the following path:

```
https:///api/appliance/vcenter/settings/v1/config/tasks
```

The response returns a task ID, which we can use to query the result.
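A sketch of polling such a task until completion (the per-task path and the `status`/`RUNNING` field names are assumptions based on the usual vSphere task shape; confirm them in the API Explorer):

```python
# Sketch: poll a vCenter Profiles task until it finishes.
# The per-task path and the "status"/"RUNNING" field names are assumptions;
# check the API Explorer for the exact schema.
import json
import time
import urllib.request

def task_url(vcenter_host: str, task_id: str) -> str:
    return f"https://{vcenter_host}/api/appliance/vcenter/settings/v1/config/tasks/{task_id}"

def wait_for_task(vcenter_host, session_id, task_id, interval=5, timeout=300):
    """Poll the task endpoint until its status leaves RUNNING, or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        req = urllib.request.Request(
            task_url(vcenter_host, task_id),
            headers={"vmware-api-session-id": session_id},
        )
        with urllib.request.urlopen(req) as resp:
            task = json.load(resp)
        if task.get("status") != "RUNNING":
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```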

In this practical example, we created a basic profile for testing and used it to compare the current configuration against the created profile. We also used Postman to interact with the vCenter Profiles API.

We hope this practical example gives you an idea of how to start using the different vCenter Profiles APIs. Until the next article!

Unlocking the Power of Cloud Architecture with Horizon 8 APIs

Cloud Architecture: The Horizon REST API for Environment Automation

As IT professionals, we know that automation is a fundamental tool for running our technology environments efficiently and securely. In the cloud, automation becomes even more important, since servers and applications live in virtual environments and can be deployed across different geographic locations.

In this article, we will explore the Horizon REST API, a powerful tool for automating the deployment and management of virtual desktop infrastructure (VDI) and application environments. We will discuss the benefits of using REST APIs, how to authenticate and execute REST calls, and provide examples of how to use the Horizon REST API to automate tasks such as deploying and managing desktops and applications.

Benefits of Using REST APIs

---------------------------

Using REST APIs offers several benefits for our automation efforts:

### 1. Platform independence

REST APIs allow us to integrate with different platforms and systems without the need for native integrations. This means we can use a single set of APIs to interact with multiple systems, making it easier to manage and maintain our automation scripts.

### 2. Version control

Since REST APIs are defined using standard protocols such as HTTP and JSON, we can easily version control our API calls and track changes over time. This helps us maintain a history of our API calls and identify any issues or errors that may arise.

### 3. Scalability

REST APIs are highly scalable, allowing us to integrate with large-scale systems and applications without worrying about performance degradation. This is especially important in the cloud environment, where resources can be easily scaled up or down as needed.

How to Authenticate and Execute REST Calls

------------------------------------------

To use the Horizon REST API, we need to authenticate our requests first. We can do this by including a valid authentication token in our API calls. Here’s an example of how to authenticate and execute a REST call using Postman:

1. Download the Postman collection for the Horizon REST API.

2. Import the collection into Postman and select the appropriate authentication method (e.g., username and password, OAuth, etc.).

3. Include the authentication token in the request headers or query parameters, depending on the API endpoint.

4. Execute the REST call by sending a GET, POST, PUT, or DELETE request to the appropriate endpoint.
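The same authentication flow can be scripted instead of using Postman. A minimal sketch, assuming the Horizon 8 `POST /rest/login` endpoint that exchanges username, password, and domain for a bearer token (host and credentials are placeholders):

```python
# Sketch of the authentication flow without Postman. The POST /rest/login
# endpoint and its username/password/domain body follow the Horizon 8 REST
# API; the host and credentials here are placeholders.
import json
import urllib.request

def login_request(horizon_host, username, password, domain):
    """Build the POST /rest/login request that exchanges credentials for tokens."""
    body = json.dumps({"username": username, "password": password, "domain": domain})
    return urllib.request.Request(
        f"https://{horizon_host}/rest/login",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def auth_header(access_token: str) -> dict:
    """Header to attach to every subsequent REST call."""
    return {"Authorization": f"Bearer {access_token}"}

# Usage against a live Connection Server:
# with urllib.request.urlopen(login_request("horizon.example.com",
#                                           "svc-api", "secret", "CORP")) as r:
#     token = json.load(r)["access_token"]
```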

Examples of Using the Horizon REST API

--------------------------------------

Now that we have authenticated and executed our first REST call, let’s explore some examples of how to use the Horizon REST API to automate tasks such as deploying and managing desktops and applications:

### 1. Deploying Desktops

We can use the `POST /rest/desktops` endpoint to deploy new desktops. Here’s an example of a POST request using Postman:

```http
POST /rest/desktops
Content-Type: application/json

{
  "name": "My Desktop",
  "description": "My desktop for testing purposes",
  "template": "path/to/desktop/template.vm"
}
```

This will create a new desktop with the specified name and description, and use the specified template to deploy it.

### 2. Managing Applications

We can use the `GET /rest/applications` endpoint to retrieve a list of all applications in our Horizon environment. Here’s an example of a GET request using Postman:

```http
GET /rest/applications
Accept: application/json
```

This will return a JSON array of all applications in our environment, including their name and description.

### 3. Updating Desktop Properties

We can use the `PUT /rest/desktops/{desktopId}` endpoint to update the properties of an existing desktop. Here’s an example of a PUT request using Postman:

```http
PUT /rest/desktops/1234567890
Content-Type: application/json

{
  "name": "My Desktop - updated",
  "description": "My desktop for testing purposes, updated"
}
```

This will update the name and description of the desktop with the specified ID.
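Since the three examples above differ only in HTTP method, path, and body, a small helper can build all of them. A minimal sketch (host, token, and the `/rest/...` paths are the illustrative values used above; check your Horizon version's API reference for the exact endpoints):

```python
# Single helper that builds all of the GET/POST/PUT requests shown above.
# Host, token, and the /rest/... paths are illustrative values.
import json
import urllib.request

def rest_request(method, host, path, token, body=None):
    """Build an authenticated Horizon REST request (GET/POST/PUT/DELETE)."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        f"https://{host}{path}",
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method=method,
    )

# Example: the PUT call that updates a desktop's name and description.
req = rest_request(
    "PUT", "horizon.example.com", "/rest/desktops/1234567890", "<token>",
    body={"name": "My Desktop - updated", "description": "updated"},
)
```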

Conclusion

----------

In this article, we have explored the REST API of Horizon and how to use it to automate tasks such as deploying and managing desktops and applications. We have discussed the benefits of using REST APIs, how to authenticate and execute REST calls, and provided examples of how to use the Horizon REST API.

By leveraging the power of REST APIs, we can easily integrate our Horizon environments with other systems and automate many of the repetitive tasks involved in managing virtual desktops and applications. This not only saves us time and effort but also helps ensure consistency and reliability across our environment.

Back to Basics

High Availability (HA) in vSphere: An Architectural View

As virtualization professionals, it is important to understand how High Availability (HA) works in vSphere in order to design and configure reliable, scalable virtual environments. In this article, we will dig into the basics of how HA works and discuss important design topics to consider when implementing it in our virtual environments.

What is High Availability in vSphere?

High Availability (HA) is a vSphere feature that lets administrators build clusters of ESXi hosts that are highly available and fault-tolerant. HA is built on the Fault Domain Manager (FDM), a critical cluster component responsible for detecting and recovering from failures of cluster nodes.

Components of High Availability in vSphere

HA in vSphere is made up of three main elements:

1. FDM agent: This agent runs on every ESXi host that is part of an HA-enabled cluster. The FDM agent monitors the state of the cluster nodes and notifies vCenter Server in case of failures.

2. HOSTD: This element is not officially part of FDM, but it is critical for FDM to work properly. HOSTD provides information about the VMs registered on each ESXi host and is used to keep data consistent across the cluster nodes.

3. vCenter Server: This element configures the FDM agents on the hosts that form the HA cluster, communicates cluster changes, and protects and unprotects VMs as they are powered on or off.

High Availability Architecture in vSphere

The HA architecture in vSphere is based on "masters" and "slaves". The nodes of an HA cluster communicate with each other in several ways, for example:

1. Heartbeats: cluster nodes exchange heartbeats, which let each node know the state of the other nodes and detect failures.

2. VMware Tools: heartbeats from VMware Tools inside each guest are used by HA's VM Monitoring feature to detect unresponsive VMs.

3. vMotion: this feature allows VMs to be moved live between hosts in the cluster without service interruption.
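The heartbeat mechanism in point 1 can be illustrated with a toy timeout-based detector (a simplified model for intuition only, not how FDM is actually implemented):

```python
# Toy model: a node is declared failed when no heartbeat has been received
# from it within the timeout window. Timestamps are plain floats (seconds).
class HeartbeatMonitor:
    def __init__(self, timeout_s: float = 15.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # node name -> time of last heartbeat

    def heartbeat(self, node: str, now: float) -> None:
        """Record a heartbeat from a node."""
        self.last_seen[node] = now

    def failed_nodes(self, now: float) -> list:
        """Nodes whose last heartbeat is older than the timeout."""
        return [n for n, t in self.last_seen.items() if now - t > self.timeout_s]

# esx01 stops sending heartbeats and is eventually declared failed:
m = HeartbeatMonitor(timeout_s=15.0)
m.heartbeat("esx01", 0.0)
m.heartbeat("esx02", 10.0)
```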

Possible States of HA Cluster Nodes

There are several possible states for the nodes of an HA cluster, including:

1. Online: the node is available and can process requests.

2. Offline: the node is unavailable and cannot process requests.

3. Degraded: the node is available, but one of its components has failed.

4. Suspended: the node is available, but it has been manually suspended by the administrator.

Conclusion

High Availability in vSphere is a critical feature for the reliability and scalability of our virtual environments. Understanding how HA works is essential to designing and configuring environments that are highly available and fault-tolerant. In the next article in this series, we will dig into important design topics to consider when implementing HA in our virtual environments.

VMware Cloud Foundation

Design Considerations in a VCF Environment: Topologies and Advantages

As an architecture professional, I am pleased to share my knowledge and experience designing VMware Cloud Foundation (VCF) environments. In this article, I want to talk about design considerations in a VCF environment, specifically topologies and the advantages of multiple WLDs versus WLDs with multiple clusters.

VCF Topologies

There are two main options when designing a VCF environment: the standard architecture and the consolidated architecture. The choice between them depends on the specific needs of the environment and the goals of the organization.

Standard Architecture

In this topology, each WLD (Workload Domain) has its own clusters, which allows better management and isolation of the workloads running in each one. This option is recommended for environments with diverse workloads that require different levels of resources and configuration.

Consolidated Architecture

In this topology, management components and workloads share the same clusters, which improves efficiency and reduces hardware and management costs. This option is recommended for environments with similar workloads that require high availability and scalability.

Advantages of Multiple WLDs vs. WLDs with Multiple Clusters

There are several advantages to using multiple WLDs instead of multiple clusters within a single WLD:

1. Better workload management and isolation: each WLD can have its own configuration and dedicated resources, allowing better management and isolation of workloads.

2. Greater flexibility: WLDs can be created and configured independently, allowing more flexibility in workload placement and resource assignment.

3. Cost reduction: using multiple WLDs can reduce the costs associated with acquiring and maintaining hardware, as well as with managing and administering the clusters.

4. Better scalability: multiple WLDs allow greater scalability in terms of resources and workloads, which can be beneficial for environments with growing demands.

Conclusion

In conclusion, the design of a VCF environment must take into account the specific needs of the environment and the goals of the organization. The standard and consolidated topologies each have advantages and disadvantages, and the choice between them depends on the design considerations of each case. Additionally, using multiple WLDs can be beneficial in terms of management and scalability, but this must be evaluated against the specific needs of each environment. I hope this information has been useful. We look forward to your questions!

Unlocking the Power of Cloud Computing with VMware Cloud Foundation

Arquitectura en la Nube: Understanding VMware Cloud Foundation (VCF)

As a VMware enthusiast, I am excited to share my knowledge of VMware Cloud Foundation (VCF), an innovative solution that integrates all the technologies I love in the stack of VMware. VCF is a “stack” of products that includes vSphere, vSAN, NSX, and different product versions for managing the lifecycle of the platform, all of which have been tested and listed in a Bill Of Materials (BOM). This article will delve into the unique aspects of VCF and its features, as well as discuss Workload Domains (WLDs) and Management Workload Domain (MGMT WLD).

VCF Overview

------------

VMware Cloud Foundation (VCF) is an integrated stack of products that includes vSphere, vSAN, NSX, and different product versions for managing the lifecycle of the platform. The following image shows the different versions of components included in VCF 4.0:

[Image: VCF 4.0 Components]

The advantage of having a BOM is that it eliminates the need to verify compatibility between the different components we want to include in a design, as VMware engineering has already spent countless hours verifying compatibility, scalability, and identifying potential bugs. This ensures that the setup and operation of a VCF environment are predictable.

Two Unique Components of VCF

-------------------------------

There are two components unique to VCF: SDDC Manager and Cloud Builder. All other components are familiar to us: vSphere, vSAN, and NSX. We can deliver solutions like PKS and Workspace ONE within VCF following what is known as "prescriptive guidance" (VVD) or manual guidance.

What is a Workload Domain?

------------------------------

From the perspective of VCF, a Workload Domain (WLD) is considered a logical SDDC that can be composed of one or more vSphere clusters. There are two types of WLDs:

1. Management WLD (MGMT WLD): This is configured during the initial "bring-up" and hosts a dedicated vCenter Server instance for each WLD.

2. VI WLD (Virtual Infrastructure WLD): These are compute WLDs that can be composed of one or more vSphere clusters.

Management Workload Domain

---------------------------

The Management Workload Domain is where all management components run, such as SDDC Manager, vCenter Server, the NSX Manager instance, and an NSX Edge cluster for virtual networking (if enabled during bring-up). Some considerations for the MGMT WLD include:

* There is a consolidated architecture model where we can execute VI workloads within the same Management WLD. In this case, resource pool segmentation will be used to separate resources between management components and workloads. We will discuss the different topologies supported by VCF in a separate article.

Virtual Infrastructure Workload Domain

--------------------------------------

The VI WLDs are where we execute workloads, assuming a standard VCF architecture (MGMT + VI WLDs) and not a collapsed one. Creating these VI WLDs is done through SDDC Manager, and once created, they will appear in the vSphere inventory as follows:

[Image: vSphere Inventory with MGMT and VI WLDs]

I hope this article provides a good introduction to VCF and its features. Stay tuned for more technical and design-related articles on VCF.

Best regards!

Bitfusion and VMware

Agustín Malanco is a VMware CTO Ambassador and recently attended the OCTO Global Field & Industry program, where he had the opportunity to learn about new technologies and architectures that are being developed or have already been introduced within the VMware ecosystem. One of the most striking new technologies he encountered was Bitfusion, which was acquired by VMware a few months ago.

Bitfusion is a solution that enables the creation of a distributed pool of GPU resources, allowing applications to access these resources as if they were local, thereby improving the utilization of available resources and enabling more flexible and scalable computing environments. This technology has the potential to revolutionize the way we approach computing and data processing, particularly in fields such as machine learning (ML), artificial intelligence (AI), and big data.

So, what do these technologies have in common? They all rely on the use of GPUs to process large amounts of data quickly and efficiently. However, traditional methods of accessing GPU resources have been limited by the need for local access and management of these resources, which can lead to silos within data centers and suboptimal resource utilization.

Bitfusion changes this by providing a distributed pool of GPU resources that can be accessed by applications as needed, without the need for physical local access or rearchitecture of applications. This allows for more flexible and scalable computing environments, and enables applications to take advantage of the vast processing power of GPUs without the limitations of traditional GPU access methods.

The architecture of Bitfusion is designed to be simple and straightforward, with three main components:

1. The Bitfusion Client: This component provides a simple interface for applications to request access to the distributed pool of GPU resources.

2. The Bitfusion Server: This component manages the pool of GPU resources and directs requests from the client to the appropriate resource.

3. The GPU Resources: These are the actual GPU resources that are being pooled and made available for access by applications.
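The interaction between these three components can be modeled with a toy allocator (purely illustrative; Bitfusion's real scheduling and API are far more sophisticated):

```python
# Toy model of the client/server interaction described above: the server
# tracks a shared pool of GPUs and grants or refuses client requests.
class GpuPool:
    """The 'server' side: manages a shared pool of GPUs for remote clients."""

    def __init__(self, total_gpus: int):
        self.free = total_gpus
        self.allocations = {}  # client id -> GPUs currently held

    def allocate(self, client: str, gpus: int) -> bool:
        """Grant the client's request if enough capacity remains."""
        if gpus > self.free:
            return False
        self.free -= gpus
        self.allocations[client] = self.allocations.get(client, 0) + gpus
        return True

    def release(self, client: str) -> None:
        """Return all of a client's GPUs to the shared pool."""
        self.free += self.allocations.pop(client, 0)

pool = GpuPool(total_gpus=4)
```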

There are several key considerations when designing with Bitfusion, including:

1. Resource Management: Bitfusion must be able to manage the pool of GPU resources effectively to ensure that they are utilized efficiently and that there is no wasted capacity.

2. Application Compatibility: Bitfusion must be able to work seamlessly with a wide range of applications, without requiring any modifications or rearchitecture of these applications.

3. Security: Bitfusion must provide robust security features to ensure the integrity and confidentiality of data being processed by the GPU resources.

Overall, Bitfusion represents an exciting development in the field of computing and data processing, and has the potential to enable new use cases and applications that were previously not possible. As more and more organizations look for ways to harness the power of GPUs, solutions like Bitfusion will play an increasingly important role in enabling flexible, scalable, and efficient computing environments.

Unlocking the Power of Cloud Storage

Agustín Malanco’s Blog: vSAN and Native AWS Storage Services in VMC

In our previous article, we introduced the series of articles on VMware Cloud on AWS (VMC) and discussed the architecture of vSAN, a critical component of the SDDC (Software-Defined Data Center) solution. Today, we will delve deeper into the topic of storage in VMC, discussing vSAN and native AWS storage services, and how they can be integrated with VMC.

vSAN: A Key Component of SDDC

As we mentioned earlier, vSAN is a critical component of the SDDC solution, providing a software-defined storage layer that pools local disk resources from participating servers to create a shared, centralized storage pool. In a VMC environment, vSAN provides storage for virtual machines (VMs), and it can be configured in different ways depending on the hardware profile of the servers being used.

For example, in the case of an i3 server profile, each server has 8 storage devices (NVMe drives), as shown in the following configuration:

[INST01] -> 8x NVMe drives

With this configuration, we can achieve the following benefits:

* All NVMe drives are assigned to vSAN, while a separate Elastic Block Store (EBS) volume is allocated for boot purposes.

* Elastic vSAN provides flexibility in terms of capacity requirements for instances of type R5.

Elastic vSAN: A New Capacity Option for R5 Instances

In addition to the standard vSAN configuration, VMC also offers a new capacity option called Elastic vSAN, which is specifically designed for instances of type R5 (Bare Metal on AWS). In this case, the physical servers are considered “diskless,” with only a single EBS volume for boot purposes and no local disks. The storage requirements for these instances are fulfilled by adding additional EBS volumes as needed:

[INST01] -> 1x EBS (boot) + N x EBS (storage)

The Elastic vSAN feature allows you to dynamically add or remove EBS volumes as needed, providing a flexible and scalable storage solution for R5 instances. This feature is particularly useful for workloads that require a high degree of flexibility in terms of storage capacity.

Comparison of i3 and R5 Instances

When selecting between i3 and R5 instances, it’s important to consider the following factors:

| Criteria | i3 Instances | R5 Instances |
| --- | --- | --- |
| Local Storage | 8x NVMe drives | No local storage (diskless) |
| Boot Volume | Separate EBS volume | Single EBS volume for boot |
| Capacity | Up to 35TiB (adjustable in 5TiB increments) | 15-35TiB (adjustable in 5TiB increments) |
| Compression, Encryption, and Deduplication | Supported | Supported |

Based on these factors, you can choose the appropriate instance type based on your specific requirements. For example, if you need a high degree of local storage capacity and support for compression, encryption, and deduplication, an i3 instance may be the better choice. However, if you require a more flexible and scalable storage solution with a lower upfront cost, an R5 instance may be more suitable.
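A quick sanity check of the capacity rules can be expressed in code. The 15-35 TiB range and the 5 TiB increments come from the table above; verify them against current VMC documentation before relying on these limits:

```python
# Validate an R5 elastic vSAN capacity request: it must fall in the
# 15-35 TiB range and be a multiple of the 5 TiB adjustment increment.
# (Limits taken from the comparison table in this article.)
def valid_r5_capacity(tib: int) -> bool:
    return 15 <= tib <= 35 and tib % 5 == 0
```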

Conclusion

In conclusion, vSAN and native AWS storage services are two critical components of the SDDC solution in VMC. By understanding how these components work together, you can select the appropriate instance type based on your specific requirements and create a highly available, scalable, and secure storage environment for your virtual machines. Stay tuned for our next article, where we will delve deeper into technical topics such as storage performance, availability, and resilience in VMC.

VMC 101

Introduction to VMware Cloud on AWS: A New Era in Hybrid Cloud Computing

In the ever-evolving world of cloud computing, VMware has recently announced its newest offering: VMware Cloud on AWS. This innovative solution allows organizations to run VMware workloads on the Amazon Web Services (AWS) infrastructure, providing a seamless hybrid cloud experience. As a VCDX #141 and a passionate advocate for all things virtualization, I am excited to share my insights on this groundbreaking technology.

What is VMware Cloud on AWS?

VMware Cloud on AWS is an SDDC (Software-Defined Data Center) solution that runs on top of the AWS infrastructure. It combines the power of VMware’s vSphere, vSAN, and NSX with the scalability and flexibility of AWS. This marriage of technologies creates a robust hybrid cloud platform that enables organizations to run their workloads across both on-premises and cloud environments.

Architecture Overview

Let’s take a high-level look at the architecture of VMware Cloud on AWS. As shown in the conceptual diagram below, we have two main components: the SDDC (Software-Defined Data Center) and the Hybrid Linked Mode.

SDDC (Software-Defined Data Center): The SDDC is the foundation of VMware Cloud on AWS. It consists of vSphere, vSAN, and NSX, all running on top of AWS infrastructure. This provides a centralized management platform for all workloads, whether they are on-premises or in the cloud.

Hybrid Linked Mode: Hybrid Linked Mode enables seamless integration between on-premises environments and VMware Cloud on AWS. It allows administrators to manage both environments from a single interface, providing a consistent experience across all workloads.

What is an SDDC?

An SDDC (Software-Defined Data Center) is the fundamental building block of VMware Cloud on AWS. It consists of a group of ESXi hosts that are virtualized and managed by vCenter Server. An SDDC can contain multiple clusters, each with multiple hosts, providing a highly scalable infrastructure for running workloads.

Benefits of VMware Cloud on AWS

There are several benefits to using VMware Cloud on AWS:

1. Hybrid Cloud: With VMware Cloud on AWS, organizations can enjoy the benefits of both on-premises and cloud environments. They can run their workloads in a hybrid cloud setup, leveraging the strengths of each environment.

2. Scalability: The AWS infrastructure provides scalability and flexibility, allowing organizations to quickly adjust their resources as needed.

3. Centralized Management: VMware Cloud on AWS provides centralized management across both on-premises and cloud environments, making it easier to manage workloads and maintain consistency.

4. Enhanced Security: With NSX, organizations can enjoy enhanced security features such as micro-segmentation and network encryption.

5. Cost-Effective: By leveraging the scalability of AWS, organizations can run their workloads in a cost-effective manner, only paying for the resources they need.

Conclusion

VMware Cloud on AWS is a game-changer in the world of hybrid cloud computing. It provides organizations with a seamless way to run their workloads across both on-premises and cloud environments, offering numerous benefits such as scalability, centralized management, enhanced security, and cost-effectiveness. As a VCDX #141, I am excited to see how this technology will evolve and empower organizations to achieve their goals in the ever-changing landscape of cloud computing.

The Perils of Updating the Azure Arc Bridge

The blog post discusses the process of updating the Azure Arc Bridge, a tool that integrates on-premises platforms, such as VMware vSphere, with Microsoft Azure. The author describes the bridge's architecture and how it works, including the use of deployment scripts and the creation of vSphere templates. The post also touches on the importance of having access to the necessary files for the update process.

The author shares their personal experience with updating the bridge, including some unexpected issues that arose during the process. They also mention that the update process can be done manually or automated, depending on the user’s preference. The post concludes by inviting readers to subscribe to the blog for more updates and information on IT topics, and by offering a list of useful resources for those interested in learning more about cloud computing and VMware technologies.

Overall, the blog post provides a detailed overview of the Azure ARC Bridge and its update process, as well as some practical tips and resources for those working with these technologies.

Navigating the Azure CLI for Azure VMware Solution

Hi there! As a cloud computing enthusiast and a VMware expert, I’m excited to share my thoughts and experiences with you on this blog. My name is Sebastian Grugel, and I’ve been working in the IT industry since 1992, with a focus on VMware technologies since 2004. Currently, I’m leading a team as a Technical Leader at Nordcloud, where we develop hybrid cloud solutions based on VMware technology.

I’m passionate about sharing my knowledge and experience with others, and that’s why I created this blog. Here, you’ll find articles, tutorials, and insights into the world of cloud computing, VMware, and Microsoft technologies. My goal is to provide valuable information and inspire others to explore the exciting world of IT.

I believe that automation is the future of IT, and that’s why I’m always looking for new and innovative ways to automate processes using tools like Azure DevOps. I also enjoy sharing my experiences and thoughts on various IT topics through podcasts, such as Z Pasją o IT, which you can find on my website academia-datacenter.pl.

If you’re interested in learning more about cloud computing, VMware, or Microsoft technologies, or if you have any questions or need help with a project, feel free to reach out to me. I’d be happy to assist you in any way I can. And don’t forget to check out my list of recommended tools and resources for IT professionals on my website.

Thanks for stopping by, and I hope you enjoy exploring this blog!