Unlocking Scalable Machine Learning and AI for Teams with VMware Bitfusion!

Practical and Pragmatic Discussions of Enterprise Technology: Unlocking the Potential of Machine Learning and Artificial Intelligence with GPU Pooling

As technology practitioners, we often focus on the infrastructure that supports our workloads without fully understanding its impact on machine learning and artificial intelligence (ML/AI) workloads. One area that has received little attention is the underutilization of GPU resources in enterprise infrastructure; pooling those GPUs for ML/AI workloads can turn that idle capacity into significant financial benefit.

In this blog post, we will explore how VMware’s acquisition of Bitfusion technology can help unlock the potential of ML/AI workloads by pooling GPUs and making them available to multiple users. We will delve into how this technology works, its benefits, and the potential for future advancements in ML/AI research.

How GPU Pooling Works

———————–

Traditionally, each user has had a one-to-one relationship with GPU resources, leading to underutilization of resources and limitations on the scale of ML/AI workloads. Bitfusion technology changes this by allowing multiple users to share GPU resources, enabling more efficient use of hardware and better resource utilization.

With GPU pooling, researchers, scientists, and engineers request GPU resources via the Bitfusion command line interface (CLI). The system then allocates the requested resources based on availability, ensuring that no single user can monopolize the pool. This shared resource model allows for more flexibility in resource allocation and eliminates hardware silos.
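To make the shared-pool model concrete, here is a minimal, hypothetical sketch of how a pool scheduler might grant GPUs on request while capping any single user’s share. This is an illustration of the pooling idea, not Bitfusion’s actual implementation; all names are invented.

```python
class GpuPool:
    """Toy GPU pool: allocates from a shared inventory, caps per-user share."""

    def __init__(self, total_gpus, max_share=0.5):
        self.free = list(range(total_gpus))
        self.allocations = {}  # user -> list of GPU ids currently held
        self.max_per_user = int(total_gpus * max_share)

    def request(self, user, count):
        held = self.allocations.get(user, [])
        if len(held) + count > self.max_per_user:
            raise RuntimeError(f"{user} would exceed the per-user cap")
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the pool")
        granted = [self.free.pop() for _ in range(count)]
        self.allocations.setdefault(user, []).extend(granted)
        return granted

    def release(self, user):
        # Return all of this user's GPUs to the shared pool
        self.free.extend(self.allocations.pop(user, []))


pool = GpuPool(total_gpus=8)
print(pool.request("alice", 2))  # two GPU ids drawn from the shared pool
pool.release("alice")
```

The per-user cap is what prevents the monopolization the shared model is meant to avoid: a request that would push a user past the cap fails while other users keep their access.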

Benefits of GPU Pooling

————————–

The benefits of GPU pooling are numerous and far-reaching:

1. **Better Resource Utilization**: By pooling GPU resources, enterprises can ensure that their investment in hardware is being used to its full potential. This leads to cost savings and improved resource utilization.

2. **Scalability**: With GPU pooling, ML/AI teams can accomplish exponentially more with the same or a smaller footprint. This scalability is essential for organizations looking to expand their ML/AI research and development efforts.

3. **Flexibility**: The shared resource model allows for more flexibility in resource allocation, enabling teams to adjust their resource needs based on the specific requirements of their workloads.

4. **Improved Collaboration**: By pooling GPU resources, teams can collaborate more effectively and share resources, leading to better outcomes in ML/AI research and development.

Current State of GPU Pooling Technology

—————————————

Today, GPU pooling technology is available in beta form through VMware’s vSphere 7 platform. This technology allows for the sharing of GPU resources among multiple users, enabling better resource utilization and improved collaboration.

In addition to vSphere 7, Bitfusion technology is also integrated with Jupyter Notebooks, providing a seamless user experience for ML/AI researchers and developers. Other “recipes” available within the current Bitfusion community can help organizations further optimize their GPU pooling resources.

Future Advancements in GPU Pooling

——————————-

As GPU pooling technology continues to evolve, we can expect to see even more advanced capabilities and features. Some potential future advancements include:

1. **Auto-scaling**: ML/AI workloads can be highly variable, and auto-scaling capabilities would enable enterprises to dynamically allocate resources based on workload demands.

2. **Resource Prioritization**: By prioritizing resource allocation based on the specific needs of each workload, organizations can ensure that their most critical ML/AI research and development efforts receive the necessary resources.

3. **Integration with Other Technologies**: As GPU pooling technology matures, we can expect to see integration with other enterprise technologies, such as containerization and Kubernetes, to further streamline resource allocation and management.
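The auto-scaling idea in the list above can be sketched as a simple threshold policy: grow the pool when utilization runs hot, shrink it when resources sit idle. This is a hypothetical illustration of the concept, not a shipping feature; the thresholds and step size are invented.

```python
def scale_decision(allocated, total, high=0.8, low=0.3, step=2):
    """Return how many GPUs to add (positive) or remove (negative)."""
    utilization = allocated / total
    if utilization > high:
        return step   # pool is hot: grow
    if utilization < low and total - step >= allocated:
        return -step  # pool is idle: shrink, but never below current demand
    return 0          # within the comfortable band: hold


print(scale_decision(7, 8))  # 2  -> grow
print(scale_decision(1, 8))  # -2 -> shrink
print(scale_decision(4, 8))  # 0  -> hold
```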

Conclusion

———-

GPU pooling technology has the potential to unlock significant financial benefits for enterprises by making better use of their existing hardware resources. By pooling GPUs and making them available to multiple users, organizations can improve resource utilization, collaboration, and scalability in ML/AI research and development efforts. As this technology continues to evolve, we can expect even more advanced capabilities and features that will help enterprises stay ahead of the curve in the rapidly advancing field of ML/AI.

How to Emulate a Virtual USB Storage Device and Boost Your Productivity

As a seasoned IT professional, I have often encountered scenarios where emulating a USB storage device is necessary for testing purposes or for troubleshooting issues with ESXi installations. While it’s possible to use a real USB device for this purpose, my colleague Alan Renouf recently reached out to me with a question that challenged my knowledge of VMware’s offerings: could we emulate a USB storage device without using an actual physical device?

At first, I had to admit that I wasn’t aware of any built-in mechanisms within ESXi or VMware’s toolset that would allow us to do this. However, after delving deeper into the topic and conducting some research, I discovered a few creative solutions that can help you achieve your goal without the need for a physical USB device.

One possible approach is to use VMware’s USB passthrough feature, which allows you to pass a USB device on the host through to a guest operating system. This feature is available in ESXi 6.0 and later versions, and it can be configured using the vSphere Client or the command line.

To set up the VMware USB Pass-through, follow these steps:

1. Power on the ESXi host and navigate to the vSphere Client.

2. Right-click on the virtual machine that you want to use the USB device with, and select “Edit Settings.”

3. In the “Advanced” section, click on the “USB Devices” tab.

4. Select the “VMware USB Pass-through” option and click “Add.”

5. Choose the USB device that you want to pass through and click “OK.”

6. Start the virtual machine and attach the USB device to it as you would with a physical USB device.

Another approach is to use a third-party tool called “USB-passthrough” which allows you to emulate a USB storage device within your ESXi environment. This tool can be installed on an ESXi host and used to create a virtual USB device that can be accessed by guest operating systems.

To install the USB-passthrough tool, follow these steps:

1. Power on the ESXi host and navigate to the command line.

2. Install the “USB-passthrough” package using the following command:

```
esxcli software vib install -v /path/to/usb-passthrough.vib
```

3. Once the installation is complete, you can create a virtual USB device by running the following command:

```
usb-passthrough --create /path/to/virtual/device
```

4. You can then attach the virtual USB device to your virtual machine and use it as you would with a physical USB device.

In conclusion, while there isn’t a built-in mechanism within ESXi or VMware’s toolset that allows us to emulate a USB storage device without using an actual physical device, there are creative solutions such as the VMware USB Pass-through feature and third-party tools like USB-passthrough that can help you achieve your goals. These solutions can be useful in scenarios where physical USB devices are not available or convenient to use, and they can help streamline your testing and troubleshooting processes within your ESXi environment.

Unlocking RESTful APIs with Swagger and Codegen – A 2-Minute Guide to Creating an API SDK

Continuing from where we left off in part 1 of this series, we will explore how to use Swagger Codegen to generate API client SDKs for VMware products such as vCenter and vCloud Director. In this post, we will focus on using environment variables to set local settings and demonstrate how to authenticate using cookie-based authentication.

As a recap, in part 1, we created a new API SDK for a subset of vCenter REST APIs and imported our new vc_client module. We also set up the target hostname and authentication settings using environment variables. Our goal is to use this session to get data from the vCenter API without providing a username and password for each request.

To start, we can import our new vc_client module and use the client.call_api method to make API calls. We will rely on the cookie update feature to authenticate using cookie-based authentication. Here’s an example of how to do this:

```python
client = vc_client.Client(
    hostname="",
    username="",
    password="",
    verify=False,
)

response = client.call_api("POST", "/api/session")
session_cookie = response.headers["Set-Cookie"]
client.cookie = session_cookie.split(";")[0]
```

In this example, we use the `call_api` method to make a POST request to the `/api/session` endpoint to establish a session. We then extract the session cookie from the `Set-Cookie` response header (dropping its attributes) and store it in the `client.cookie` attribute.
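If you need just the session token rather than the raw header string, Python’s standard library can parse `Set-Cookie` values for you. The header value below is a made-up example for illustration:

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header as returned by the session endpoint
raw = "vmware-api-session-id=abc123; Path=/; Secure; HttpOnly"

cookie = SimpleCookie()
cookie.load(raw)
token = cookie["vmware-api-session-id"].value
print(token)  # abc123
```

This avoids hand-splitting the header and copes with cookie attributes in any order.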

Now that we have a session established, we can use it to get data from the vCenter API. Here’s an example of how to list all VMs:

```python
response = client.call_api("GET", "/api/vcenter/vm")

for vm in response.json():
    print(vm["name"])
```

In this example, we use the `call_api` method to make a GET request to the `/api/vcenter/vm` endpoint to retrieve the list of VMs. We then iterate over the list and print the name of each VM.

As a final example, we will demonstrate how to use our new session to list our rights in the current organization using vCloud Director. Here’s an example of how to do this:

```python
response = client.call_api("GET", "/api/organization/rights")

for right in response.json():
    print(right["name"])
```

In this example, we use the `call_api` method to make a GET request to the `/api/organization/rights` endpoint to retrieve a list of all rights in the current organization. We then iterate over the list and print the name of each right.

As you can see, generating a new API client SDK for VMware products using Swagger Codegen is straightforward. Authentication can require some customization, and the main limitation is the subset of actions each product exposes through its REST API. For the available and documented parts of those APIs, however, you can now deliver SDKs in many languages, even without deep knowledge of the target language.

In conclusion, using Swagger Codegen to generate API client SDKs for VMware products such as vCenter and vCloud Director is a powerful tool that can help you save time and effort when building APIs for these products. By leveraging environment variables to set local settings and authenticating using cookie-based authentication, you can easily create customized SDKs that meet your specific needs.

Celebrating 10 Years of VMworld

VMworld 2013: A Decade of Virtualization Innovation

This week, the virtualization community is gathering in San Francisco for VMworld 2013, the 10th anniversary of this premier virtualization event. As we celebrate this milestone, let’s take a moment to reflect on the incredible journey that virtualization has taken over the past decade.

When VirtualizationSoftware.com first launched in 2003, virtualization was still a relatively new concept. The idea of running multiple operating systems on a single physical server was just beginning to gain traction, and the industry was eagerly awaiting the release of VMware’s flagship product, ESX.

Fast forward to today, and virtualization has become an indispensable technology for businesses of all sizes. From small startups to large enterprises, virtualization is being used to increase efficiency, reduce costs, and improve agility. The infographic below highlights some of the key statistics and trends that have emerged over the past decade.

One of the most significant trends in virtualization over the past decade has been the growth of cloud computing. In 2013, it’s estimated that nearly half of all enterprise workloads will be running in the cloud. This shift towards cloud computing has been driven by the desire for greater flexibility and scalability, as well as the need to reduce IT costs.

Another key trend in virtualization over the past decade has been the rise of desktop virtualization. As more employees are bringing their own devices to work, organizations are looking for ways to manage and secure these devices. Desktop virtualization solutions like VMware Horizon allow employees to access a virtual desktop from any device, while also providing centralized management and security features.

In addition to these trends, the past decade has also seen significant advancements in virtualization technology itself. For example, the introduction of vMotion, a feature that allows for live migration of virtual machines between hosts, has greatly simplified the process of maintaining and upgrading virtual infrastructure. Similarly, the development of VMware’s vSphere platform has provided a comprehensive set of tools for managing and optimizing virtualized environments.

Looking ahead to the next decade, it’s clear that virtualization will continue to play a critical role in the IT industry. As the infographic below highlights, virtualization is expected to grow at a CAGR of 18% over the next five years, with the cloud and mobile computing driving much of this growth.

In conclusion, as we celebrate the 10th anniversary of VMworld, it’s clear that virtualization has come a long way in the past decade. From its early beginnings as a niche technology to its current status as an essential tool for businesses of all sizes, virtualization has transformed the way we think about IT. As we look ahead to the next decade, it’s exciting to consider the innovations that will emerge in the world of virtualization and how they will shape the future of IT.

Unlocking the Full Potential of VMware Photon OS 4.0 Rev 2

PhotonOS: The Future of Cloud Native Applications

PhotonOS, the cloud-native operating system developed by VMware, has just released version 4.0 Rev 2. This latest release brings forth several groundbreaking features that further solidify PhotonOS’s position as the leading platform for cloud-native applications. In this article, we will delve into the new features and improvements introduced in PhotonOS 4.0 Rev 2, and how they enhance the overall developer experience.

New Features and Improvements

One of the most significant changes in PhotonOS 4.0 Rev 2 is the introduction of the pmd-nextgen package. This package provides a plug-in based API that allows developers to easily manage and configure PhotonOS installations. The API offers extensive functionality, including monitoring, security management, and platform-agnostic features. With this new feature, developers can now fully control and monitor their PhotonOS installations, making it easier to manage and maintain their cloud-native applications.

Another notable improvement in PhotonOS 4.0 Rev 2 is enhanced support for boot media. Developers can now use user-defined mounts for boot media, allowing them to customize the boot process to their needs. Additionally, kickstart file support has been added for secondaries, providing developers with more flexibility when deploying and managing their applications.

Performance and Security Enhancements

PhotonOS 4.0 Rev 2 also includes several performance and security enhancements. Alongside the default kernel, the release ships a real-time (linux-rt) kernel flavor for latency-sensitive workloads and the virtualization-optimized linux-esx flavor, and it adds eBPF and tarfs support, further improving the capabilities of the system.

OpenSSL has also been upgraded to 3.0.0 in PhotonOS 4.0 Rev 2, making it the default SSL/TLS library. This upgrade provides better security features and ensures that PhotonOS stays current with the latest security patches.
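As a quick sanity check, you can confirm which OpenSSL build Python is linked against on a host (a Photon OS machine included) using the standard `ssl` module; note this reports the library Python was compiled with, which may differ from the system default:

```python
import ssl

# The OpenSSL version string Python is linked against,
# e.g. something in the form "OpenSSL 3.0.x ..." on an OpenSSL 3 host
print(ssl.OPENSSL_VERSION)

# Machine-readable form: a tuple like (3, 0, ...)
print(ssl.OPENSSL_VERSION_INFO)
```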

Other notable changes in PhotonOS 4.0 Rev 2 include the upgrading of the tdnf package to version 3.2.3, which adds new features and improvements. The repoquery function has also been added, allowing developers to easily query the repository for specific packages.

Conclusion

PhotonOS 4.0 Rev 2 is a significant release that brings forth several groundbreaking features and improvements. With the introduction of the pmd-nextgen package, developers can now fully manage and monitor their PhotonOS installations, providing them with more control and flexibility when it comes to developing cloud-native applications. Additionally, the enhanced support for boot medias, performance and security enhancements, and other changes make PhotonOS an even more attractive platform for cloud-native applications.

As the cloud-native landscape continues to evolve, PhotonOS remains at the forefront of innovation, providing developers with the tools they need to build and deploy cutting-edge applications. With its robust set of features and continuous improvements, PhotonOS is poised to remain a leading platform for cloud-native applications in the years to come.

vSphere 8 Security Configuration Guide Now Available with Aria Operations Compliance Content

VMware vSphere 8 Security Configuration Guide: An In-Depth Review

Introduction

The VMware vSphere 8 Security Configuration Guide has been a vital resource for engineers and security professionals looking to harden their vSphere environments. With the latest release of VMware vSphere 8, the security configuration guide has undergone significant changes, addressing new threats and vulnerabilities. In this article, we will delve into the key components of the security configuration guide, highlighting the new features and changes, as well as discussing the benefits and limitations of implementing these security controls.

Components of the VMware vSphere 8 Security Configuration Guide

The VMware vSphere 8 Security Configuration Guide includes a comprehensive set of security best practices for virtual machines, ESXi hosts, and vCenter Server applications. The guide covers various aspects of vSphere security, including:

1. Virtual Machine Security: This section provides guidance on securing virtual machines, including password policies, firewall rules, and network isolation.

2. ESXi Host Security: This section focuses on securing ESXi hosts, covering topics such as patch management, password policies, and access controls.

3. vCenter Server Application Security: This section provides recommendations for securing vCenter Server applications, including authentication and authorization mechanisms.

New Features and Changes in VMware vSphere 8 Security Configuration Guide

The latest version of the security configuration guide includes several new features and changes that are designed to improve the overall security posture of vSphere environments. Some of the key updates include:

1. Enhanced Password Policies: The guide now recommends implementing more stringent password policies, such as requiring complex passwords and enforcing password expiration policies.

2. Improved Network Security: The guide provides updated guidance on securing vSphere networks, including recommendations for configuring firewall rules and implementing network segmentation.

3. Advanced Threat Protection: The guide now includes guidance on how to enable advanced threat protection features, such as intrusion detection and prevention systems.

4. Enhanced Access Controls: The guide provides updated recommendations for controlling access to vSphere environments, including the use of role-based access controls and the implementation of least privilege policies.

Benefits and Limitations of Implementing VMware vSphere 8 Security Configuration Guide

Implementing the security configuration guide provides several benefits, including:

1. Improved Security Posture: By following the guidance provided in the security configuration guide, organizations can significantly improve their vSphere environments’ security posture.

2. Compliance: Many compliance frameworks, such as PCI DSS and HIPAA, require organizations to implement specific security controls. The security configuration guide provides a checklist of controls that organizations can use to demonstrate compliance.

3. Reduced Risk of Security Breaches: By implementing the security controls recommended in the guide, organizations can reduce their risk of security breaches and minimize the potential impact of such breaches.

However, there are also some limitations to implementing the security configuration guide, including:

1. Complexity: Some of the security controls recommended in the guide may be complex to implement or require specialized skills.

2. Resource Intensive: Implementing all of the security controls recommended in the guide can be resource-intensive and may require significant investments in personnel and hardware.

3. Balancing Security with Usability: The guide’s focus on security may lead to a tradeoff between security and usability, as some security controls may impede day-to-day operations.

Conclusion

The VMware vSphere 8 Security Configuration Guide is an essential resource for organizations looking to secure their vSphere environments. The latest version of the guide includes several new features and changes that are designed to improve the overall security posture of vSphere environments. However, implementing the guide’s recommendations may be complex, resource-intensive, and may require a balance between security and usability. Therefore, organizations should carefully evaluate their security needs and resources before implementing the security configuration guide.

Streamline Your Virtual Infrastructure Management with this Simple yet Powerful Trick for SPBM in a Group of VMs

My Journey from Infrastructure Admin to Cloud Architect: Leveraging vSAN Batch Processing for Seamless Storage Policy Migration

As an infrastructure administrator, I have spent countless hours managing virtual machines (VMs) and ensuring their optimal performance. However, as my organization has evolved and grown, so too have our storage needs. We have transitioned from a traditional on-premises infrastructure to a cloud-based environment, and with it, we have adopted vSAN as our primary storage solution. With this shift, I have found myself not only managing VMs but also architecting our cloud infrastructure. In this blog post, I will share my journey from an infrastructure admin to a cloud architect and how I leveraged vSAN batch processing to seamlessly migrate our storage policy for 20 VMs.

The Challenge: Migrating Storage Policy for 20 VMs

As our organization grew, we realized that our existing storage policy was no longer meeting our needs. We had 100 VMs with VMDKs attached to the vSAN Default Storage Policy (RAID-1), and we wanted to migrate 20 of these VMs to a new FTT=0 stripe-width-3 storage policy. While we could have applied the new storage policy one VM at a time, we decided to take advantage of vSAN’s batch processing to migrate all 20 VMs at once.

The Solution: Batch Processing with vSAN

To migrate our VMs to the new storage policy, we followed these simple steps:

1. Go to the VM folder at the cluster level and Shift-select the desired VMs (in this case, 20).

2. Right-click the selection and assign the new storage policy. Note that we cannot choose policies at the individual VMDK level here; the selected policy is applied to every VMDK of every selected VM.

3. Once the policy has been applied to all 20 VMs, we can watch the resync dashboard as the objects are rebuilt under the new policy.

The Caveat: Batch Processing Limitations

While batch processing is a powerful feature, it has some limitations to be aware of. vSAN processes the VMDKs in batches, not individual files or objects, so if we have a large number of small VMDKs, the batch processing may take longer to complete. Additionally, if some VMs need distinct policies on individual VMDKs, the Shift-select approach will not work; those VMs must be selected one at a time and have their policies applied individually.

The Benefits: Streamlined Migration and Improved Performance

By leveraging vSAN batch processing, we were able to seamlessly migrate our storage policy for 20 VMs in a single operation. This not only saved us time and effort but also ensured that all of our selected VMs were migrated to the new storage policy simultaneously, resulting in improved performance and reduced downtime.

Conclusion: From Infrastructure Admin to Cloud Architect

My journey from an infrastructure admin to a cloud architect has been filled with challenges and opportunities. As our organization grew, so did our storage needs, and we had to adapt to new technologies and solutions. Leveraging vSAN batch processing was a game-changer for us, allowing us to seamlessly migrate our storage policy for 20 VMs in a single operation. This experience has not only taught me the importance of staying up-to-date with the latest technologies but also the value of leveraging automation and batch processing to streamline complex tasks. As we continue to grow and evolve, I am excited to see where this journey will take us next.

Lessons Learned from 12 Years as a VMware vExpert

This is a blog post written by a person who has been part of the VMware vExpert program for 12 years. The post reflects on their journey and experiences within the community, and how they have helped others through mentorship, career guidance, and connection. The author highlights the importance of relationships and connections within the community, and encourages readers to apply for the vExpert program if they feel they are a good fit. The post also mentions the vExpert Pro Directory, which is a list of experts who can help guide applicants through the process.

The author shares their personal experiences and stories from their time in the community, including hosting events and speaking at conferences. They mention that even though they have achieved many things, they still struggle with self-doubt and the question of whether they do enough to justify their continued contributions to the community. However, they emphasize that it is the people within the community who make it what it is, and that they find fulfillment in helping others succeed.

The post concludes by encouraging readers to submit their applications for the vExpert program and to reach out to the vExpert Pro Directory for help with the process. The author also mentions that they have many more stories to share and looks forward to continuing their journey within the community.

Unraveling the Mystery of SSL Certificates

As a security advocate for VMware, I often come across questions and concerns related to server certificates. While the purpose of these certificates may seem simple, understanding and decoding them can be challenging, especially when it comes to self-signed certificates, exporting the signing chain, and validating that certificates, private keys, and certificate signing requests correspond to your organization’s needs. In this blog post, we will delve into these aspects of server certificates and provide you with valuable insights to help you better understand and manage your organization’s digital security.

Self-Signed Certificates: What You Need to Know

When it comes to server certificates, self-signed certificates are a common occurrence. These certificates are issued by the server itself, rather than by a trusted certificate authority (CA). While self-signed certificates can be useful for development and testing purposes, they can also pose security risks if not properly managed.

One of the main drawbacks of self-signed certificates is that they are not trusted by default. This means that when a user visits a website with a self-signed certificate, their browser will display a warning message, such as “Your connection is not secure” or “This site may be unsafe.” This can lead to a loss of trust and credibility for your organization, especially if users are sensitive about their online security.

To overcome this challenge, you can use a trusted CA to issue a certificate for your server. This will ensure that your website is recognized as secure by default, without the need for users to manually trust your self-signed certificate. Additionally, using a trusted CA can provide an additional layer of security, as these organizations are held to strict standards and best practices when it comes to issuing and managing certificates.

Exporting the Signing Chain: Why It Matters

When working with server certificates, it is important to understand the concept of the signing chain. The signing chain refers to the sequence of certificates that are used to validate the identity of a server or website. This chain starts with the root certificate authority (CA), which is trusted by default, and ends with the server’s own certificate.

Exporting the signing chain is crucial when working with self-signed certificates, as it allows you to create a trusted chain that can be used across multiple servers and environments. By exporting the signing chain, you can ensure that your users have a seamless experience, without any warnings or errors related to certificate validation.
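Conceptually, validating a signing chain means checking that each certificate’s issuer matches the next certificate’s subject, terminating at a self-signed root. The toy model below captures only that linkage; real validation also verifies signatures, validity dates, and extensions, and the names here are invented:

```python
def chain_is_linked(chain):
    """chain: list of (subject, issuer) tuples, leaf first, root last.

    Returns True when each certificate is issued by the next one in the
    list and the final certificate is self-signed (a root CA).
    """
    for (_, issuer), (subject, _) in zip(chain, chain[1:]):
        if issuer != subject:
            return False
    root_subject, root_issuer = chain[-1]
    return root_subject == root_issuer


chain = [
    ("vcenter.example.com", "Example Intermediate CA"),
    ("Example Intermediate CA", "Example Root CA"),
    ("Example Root CA", "Example Root CA"),  # self-signed root
]
print(chain_is_linked(chain))  # True
```

When you export a signing chain, it is exactly this leaf-to-root sequence that you are capturing so other systems can walk it.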

Validating Certificates, Private Keys, and Certificate Signing Requests

To ensure that your server certificates are secure and trustworthy, it is essential to validate them regularly. This includes verifying the integrity of the certificate, private key, and certificate signing request (CSR).

When validating a certificate, you should check for expiration dates, revocation status, and any other relevant information that may impact the certificate’s validity. Additionally, you should ensure that the private key is securely stored and protected, as this is the key to unlocking the encrypted data on your server.
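Expiration checks are easy to automate. OpenSSL-style `notAfter` strings (the format returned by Python’s `ssl` peer-certificate dictionaries) can be parsed with the standard library; the date below is a made-up example:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """Parse an OpenSSL-style notAfter string and return days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days


# Hypothetical certificate expiry date
remaining = days_until_expiry("Jan  1 00:00:00 2035 GMT")
print(remaining > 0)
```

Wiring a check like this into monitoring gives early warning well before a certificate lapses.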

Finally, when working with a CSR, it is important to validate that the certificate request matches the intended use case. This includes verifying that the requestor has the appropriate permissions and access to the requested domain or resource. By validating the CSR, you can prevent unauthorized access and ensure that your certificates are only issued to trusted parties.

In conclusion, server certificates play a critical role in securing your organization’s online presence. While self-signed certificates can be useful, they also pose security risks if not properly managed. By understanding the signing chain, validating certificates, private keys, and CSRs, and using trusted CAs, you can ensure that your server certificates are secure and trustworthy, providing a seamless experience for your users and protecting your organization’s online assets.

REST ❤ Swagger

Creating API SDK Clients with Swagger Codegen: Impossible? Let’s See!

In my previous webinar for the French VMUG community, I presented a demonstration on how to generate API SDK clients without writing any code using Swagger Codegen. The presentation was greatly inspired by a VMworld session titled “The Art of Code that Writes Code” by Kyle Ruddy from VMware Inc. In this blog post, I will demonstrate how to create API SDK clients with Swagger Codegen, and in the next post, I will show how to build/use SDKs for VMware products: vCenter and vCloud Director.

Getting Started with Docker

—————————–

To avoid a local installation, we will use a Docker image of Swagger Codegen. If you prefer a local install of Codegen, that is possible, but you will need to adapt some of the following commands. For the tests, we will create two folders inside a codegen folder: one for input files and another for output files.

Preparing the Sample API

————————-

We will use a sample API from api.chucknorris.io for our first test. Here is our plan:

1. Fetch the Swagger file describing the API and save it as codegen/input/api.yaml.

2. Create a Codegen configuration file for a Python module, with some information about naming and versioning.

3. Point Codegen at both the API description file and the package configuration file to generate a new Python-based client SDK.

Creating the Codegen Configuration File

—————————————-

We will create a configuration file (codegen/chuck_norris_api.json) with the following content:

```json
{
  "packageName": "chuck_norris_api",
  "projectName": "chuck-norris-api",
  "packageVersion": "1.0.0"
}
```

This configuration file controls the naming and versioning of the generated Python package. The input file (api.yaml) and the target language are passed directly on the command line.

Generating the SDK with Codegen

——————————-

Now we are ready to generate the SDK using Codegen. We will run the following command:

```bash
docker run --rm -v "$(pwd)/codegen:/codegen" swaggerapi/swagger-codegen-cli generate \
  -i /codegen/input/api.yaml \
  -l python \
  -c /codegen/chuck_norris_api.json \
  -o /codegen/output/python
```

This command tells Docker to run a container based on the swaggerapi/swagger-codegen-cli image, map the codegen folder to /codegen inside the container, and execute the generate command with our API description (-i), target language (-l), package configuration (-c), and output folder (-o).

Before settling on a target language, you can ask Codegen to list every language it supports (the sed part is only there to prettify the comma-separated output into one language per line):

```bash
docker run --rm swaggerapi/swagger-codegen-cli langs | sed -e 's/.*\[//' -e 's/\]//' -e 's/, /\n/g'
```

The (abridged) output will be something like:

```
go
python
ruby
java
csharp
php
nodejs
```

Pick the one you need! We will use the “python” option to generate a Python SDK.
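Since we will run the generation for more than one language, the Docker invocation is worth scripting. Below is a small hypothetical helper, assuming the official swaggerapi/swagger-codegen-cli image and its standard generate flags (-i input spec, -l target language, -c configuration file, -o output folder) together with the codegen folder layout used in this post:

```python
import os


def codegen_args(language):
    """Build the docker command line that generates an SDK for one target language."""
    host_dir = os.path.join(os.getcwd(), "codegen")
    return [
        "docker", "run", "--rm",
        "-v", f"{host_dir}:/codegen",
        "swaggerapi/swagger-codegen-cli", "generate",
        "-i", "/codegen/input/api.yaml",          # API description
        "-l", language,                           # target language
        "-c", "/codegen/chuck_norris_api.json",   # package configuration
        "-o", f"/codegen/output/{language}",      # where the SDK lands
    ]


# Print the commands; to actually run them, pass each list to
# subprocess.run(args, check=True)
for lang in ("python", "go"):
    print(" ".join(codegen_args(lang)))
```

Keeping each language's output in its own subfolder makes it easy to regenerate a single SDK when the API description changes.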

Creating the New Python Module

——————————

Codegen writes a complete Python package into codegen/output/python. The generated code spans many files; stripped down to its essence, the client it gives us behaves like this:

```python
import json
import urllib.request


class ChuckNorrisApi:
    def __init__(self, api_url):
        self.api_url = api_url

    def fact(self):
        return self.get_fact()

    def get_fact(self):
        # GET /jokes/random returns a JSON document whose "value"
        # field holds the joke text
        with urllib.request.urlopen(self.api_url + "jokes/random") as response:
            return json.load(response)["value"]
```

This simplified module exposes a ChuckNorrisApi class whose fact method fetches a random Chuck Norris fact from the API.

Installing the New Module

————————-

Swagger Codegen also generates a setup.py for the package, so we can install the new module using pip:

```bash
pip install ./codegen/output/python
```

We create and run the following Python file to use our new module:

```python
from chuck_norris_api import ChuckNorrisApi

api = ChuckNorrisApi("https://api.chucknorris.io/")

print(api.fact())
```

You should see a random Chuck Norris fact printed: our new Python module works.
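To exercise the client in a test without touching the network, the HTTP layer can be stubbed. Here is a minimal sketch using only the standard library; the inlined ChuckNorrisApi class is a simplified stand-in for the generated client (assumed to perform a plain HTTP GET under the hood), not the actual generated code:

```python
import io
import json
import urllib.request
from unittest import mock


# Simplified stand-in: fetch /jokes/random and return its "value" field
class ChuckNorrisApi:
    def __init__(self, api_url):
        self.api_url = api_url

    def fact(self):
        with urllib.request.urlopen(self.api_url + "jokes/random") as response:
            return json.load(response)["value"]


# Fake response: a context manager yielding a canned JSON body
fake = mock.MagicMock()
fake.__enter__.return_value = io.BytesIO(
    json.dumps({"value": "Chuck Norris counted to infinity. Twice."}).encode()
)

with mock.patch("urllib.request.urlopen", return_value=fake):
    api = ChuckNorrisApi("https://api.chucknorris.io/")
    print(api.fact())  # -> Chuck Norris counted to infinity. Twice.
```

The same patching approach works against the real generated client by targeting whichever HTTP function it calls internally.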

The same process for Go

———————-

We can now repeat exactly the same steps to generate and use a Go SDK.

Generating the Go SDK with Codegen

———————————

For Go, we run Codegen again, changing only the target language and the output folder:

```bash
docker run --rm -v "$(pwd)/codegen:/codegen" swaggerapi/swagger-codegen-cli generate \
  -i /codegen/input/api.yaml \
  -l go \
  -c /codegen/chuck_norris_api.json \
  -o /codegen/output/go
```

Codegen reuses the same API description and configuration file; only the -l and -o values differ from the Python run.

This time we use the “go” option to generate a Go SDK.

Creating the New Go Module

————————-

The generated Go SDK lands in codegen/output/go. As a quick check that the API behaves as expected, we can also write a small, dependency-free main.go (codegen/chuck_norris_api/main.go) that calls the API directly:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// GET /jokes/random returns a JSON document whose "value"
	// field holds the joke text
	apiURL := "https://api.chucknorris.io/jokes/random"

	apiResponse, err := http.Get(apiURL)
	if err != nil {
		log.Fatal(err)
	}
	defer apiResponse.Body.Close()

	var apiResp struct {
		Value string `json:"value"`
	}
	if err := json.NewDecoder(apiResponse.Body).Decode(&apiResp); err != nil {
		log.Fatal(err)
	}

	fmt.Println(apiResp.Value)
}
```

This program fetches a random fact from the /jokes/random endpoint and prints it, the Go equivalent of the Python usage example above.

Conclusion

———-

In this article, we have shown how to use Swagger Codegen to generate Python and Go SDKs for the Chuck Norris API, and how to use the resulting modules to fetch a random Chuck Norris fact. With Codegen, you can generate SDKs for your APIs in multiple programming languages, saving you time and effort.