Unlocking vRO

SSH Plug-in for Aria Automation Orchestrator (formerly vRealize Orchestrator) provides a convenient way to work with remote hosts via SSH. However, the plugin has some limitations and quirks that can make it challenging to use. In this blog post, we will discuss some of the issues and workarounds for using the SSH Plug-in in Aria Automation Orchestrator.

Issue 1: The SSHHostManager and SSHHost objects are not useful

The SSH Plug-in provides two objects, SSHHostManager and SSHHost, which seem useful at first glance. In practice, however, they serve only as containers: they offer no additional features or methods for working with remote hosts, and the plugin relies on the SSHSession object for everything practical.

Issue 2: SSHSession is not linked to SSHHost objects

The SSHSession object is responsible for managing all the practical aspects of SSH connections, such as authentication, session management, and file transfers. However, the SSHSession object is not linked to the SSHHost objects in any meaningful way. This means that you need to create a separate SSHSession object for each host you want to work with, even if you have already created an SSHHost object for that host.

Issue 3: Key pair management is inconvenient

The SSH Plug-in provides a limited key pair management feature, which can be inconvenient when working with multiple hosts. The plugin generates a key pair for vRO and stores it in /var/lib/vco/app-server/conf/vco_key. If you want to use a different key pair for a particular host, you need to generate a new one with the KeyPairManager.generateKeyPair() method and specify the desired parameters.

Issue 4: SSHSession creation is not straightforward

To create an SSHSession object, you need to provide all the necessary connection details, such as the hostname, port number, username, and password. However, this process can be cumbersome and error-prone, especially when working with multiple hosts.
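A minimal scriptable-task sketch of this process is shown below. The SSHSession API names follow the SSH plug-in's scripting objects, but method signatures vary between plug-in versions, so verify them in your API Explorer before relying on this.

```javascript
// Sketch of creating an SSHSession directly from connection details in a
// vRO scriptable task. Verify method signatures (especially the second
// argument of executeCommand) in your plug-in version's API Explorer.
function runSshCommand(hostname, port, username, password, cmd) {
    var session = new SSHSession(hostname, username, port);
    session.connectWithPassword(password);
    try {
        session.executeCommand(cmd, true); // second argument: treat output as encoded text
        if (session.exitCode !== 0) {
            throw "Command failed: " + session.error;
        }
        return session.getOutput();
    } finally {
        session.disconnect();
    }
}
```

Wrapping this in a helper like the one above is the usual way to avoid repeating the connect/execute/disconnect boilerplate for every host.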

Workaround 1: Use the existing SSHHost objects

One way to simplify working with remote hosts is to reuse the SSHHost objects you already have: read the connection details they store and use those to create the SSHSession. This approach can save time and effort, especially when working with multiple hosts.

Workaround 2: Use the KeyPairManager

Another way to simplify key pair management is to use the KeyPairManager, which lets you generate a key pair for a specific host and store it in the desired location, instead of relying on the single key pair generated for vRO.
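A hypothetical sketch of per-host key generation follows. The KeyPairManager object is the one named above, but the parameter list (key type, bit length, passphrase, output path) and the storage path are assumptions; consult the API Explorer for the actual signature in your plug-in version.

```javascript
// Hypothetical sketch: generate a dedicated key pair per host. The
// generateKeyPair() parameters and the key storage path are assumptions,
// not the documented API -- verify in the API Explorer.
function createHostKeyPair(hostname) {
    var keyPath = "/var/run/vco/keys/" + hostname; // assumed storage location
    KeyPairManager.generateKeyPair("RSA", 2048, "", keyPath);
    return keyPath;
}
```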

Workaround 3: Use temporary files for file transfers

When working with remote hosts, it is sometimes necessary to transfer files between the local machine and the remote host. The SSH Plug-in provides a limited file transfer feature that only works with files located on the server where vRO is installed. To overcome this limitation, you can use temporary files to store information and perform file transfers. The /data/vco/usr/lib/vco/app-server/temp directory is a good location for temporary files, as it is already mounted on the server where vRO is installed.
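The temp-file pattern can be sketched as follows. FileWriter is a standard vRO scripting object; putFile() is part of the SSH plug-in's SSHSession object, though its exact signature should be verified in the API Explorer for your version.

```javascript
// Sketch of staging content in the vRO temp directory and pushing it to a
// remote host over an already-connected SSHSession. Verify putFile()'s
// signature in the API Explorer for your plug-in version.
function pushTempFile(session, content, remotePath) {
    var localPath = "/data/vco/usr/lib/vco/app-server/temp/upload-" + new Date().getTime();
    var writer = new FileWriter(localPath);
    writer.open();
    writer.write(content);
    writer.close();
    session.putFile(localPath, remotePath);
    return localPath; // caller can delete the temp file afterwards
}
```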

In conclusion, the SSH Plug-in for Aria Automation Orchestrator (formerly vRealize Orchestrator) provides a convenient way to work with remote hosts via SSH. However, it has some limitations and quirks that can make it challenging to use. By using the existing SSHHost objects, the KeyPairManager, and temporary files, you can simplify the process of working with remote hosts and overcome some of the limitations of the SSH Plug-in.

Unlocking the Power of vRO and phpIPAM Integration

As a developer, I understand the importance of streamlining processes and automating tasks to improve efficiency. In my previous article, I described how to integrate vRealize Automation with phpIPAM. However, for a smooth and full-featured experience, it is essential to have a package for vRealize Orchestrator that includes a set of processes for invoking the most frequently used functions of phpIPAM.

The official documentation for the API of phpIPAM provides a list of available functions, but often lacks complete information about the required parameters and their descriptions. In the latest version of the package, we have expanded the set of processes and thoroughly revised all the main processes.

To work with the phpIPAM API, you need to create an “API key” (menu Administration -> API) with the App security parameter set to “SSL with App code token.” In the configuration element, the App ID is stored in the appId attribute and the App Code in the token attribute. Additionally, you can specify a name for the phpipam_api configuration element, which stores the URL of the REST host. This parameter is optional but useful when working with multiple phpIPAM servers (on each server, you need to create an identical App ID and App Code).
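To make the attribute mapping concrete, here is a minimal sketch of how the appId, token, and REST-host URL combine into a request. The helper itself is hypothetical; the relevant phpIPAM convention is that with App security set to “SSL with App code token,” the App Code travels in a "token" header and the App ID is embedded in the URL path.

```javascript
// Hypothetical helper: build a phpIPAM REST request from the
// configuration element's attributes (appId, token, REST host URL).
function buildPhpIpamRequest(baseUrl, appId, appCode, controller) {
    return {
        url: baseUrl.replace(/\/$/, "") + "/api/" + appId + "/" + controller + "/",
        headers: {
            "token": appCode, // App Code sent on every call
            "Content-Type": "application/json"
        }
    };
}
```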

Preparing the package for work includes the following steps:

1. Install the package in vRealize Orchestrator.

2. Register the REST host of phpIPAM.

3. Launch the “Initialize (phpIPAM)” process.

All other workflows in the package call the phpIPAM API through the central “Invoke a REST operation (phpIPAM)” workflow.

Download: vro-phpipam v3.0.1

If you have any questions or suggestions for improving the package, please leave a comment below.


Mastering vRA in 8 Easy Steps


Managing vRealize Automation 8: A Collection of Commands and Tips

If you’re struggling to manage your vRealize Automation 8 (vRA) environment, you’re not alone. As an administrator, it can be overwhelming to keep track of all the different commands and options available for managing vRA. That’s why I’ve put together this collection of frequently used commands and tips to help make your life a little easier.

First, let’s talk about the vracli command. This is the primary command-line interface (CLI) tool for managing vRA, and it provides a wide range of options for performing various tasks. Some of the most commonly used options include:

* `vracli login`: Log in to the vRA server using your credentials.

* `vracli config`: View or modify the vRA configuration.

* `vracli provision`: Provision virtual machines and other resources.

* `vracli deploy`: Deploy applications and templates.

* `vracli manage`: Manage existing deployments.

In addition to these core options, there are many others available for performing more specialized tasks. For example, you can use the `vracli db` option to interact with the vRA database, or the `vracli audit` option to view audit logs.

One thing to keep in mind when working with vRA is that making changes directly to the database is not recommended and can be risky. Instead, it’s best to use the vracli command-line interface to perform changes through the API. This will help ensure that your changes are properly recorded and tracked.

Another important aspect of managing vRA is configuring log bundling. By default, vRA logs are not bundled, which can make it difficult to troubleshoot issues or audit activities. To enable log bundling, you can use the `vracli config` option with the `--log-bundle` flag. For example:

```
vracli config --log-bundle
```

This will configure vRA to bundle logs for all subsequent activities. However, keep in mind that this can impact performance, so it’s important to carefully consider when and how you enable log bundling.

Finally, if you need to automate tasks or monitor your vRA environment, the REST API is a powerful tool at your disposal. The REST API provides a wide range of endpoints for performing various tasks, such as provisioning resources, deploying applications, or retrieving configuration data. By using the REST API in conjunction with tools like PowerShell or Python, you can automate many aspects of vRA management and make your life much easier.
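For example, authenticating against the vRA 8 REST API is a two-step exchange: trade credentials for a refresh token, then trade the refresh token for an access token. The sketch below only builds the request shapes (endpoints per vRA 8's documented login flow) to plug into any HTTP client; it performs no network calls itself.

```javascript
// Step 1: credentials -> refresh token.
function loginRequest(vraHost, username, password) {
    return {
        url: "https://" + vraHost + "/csp/gateway/am/api/login",
        method: "POST",
        body: { username: username, password: password } // response carries refresh_token
    };
}

// Step 2: refresh token -> bearer access token for /iaas and other APIs.
function iaasTokenRequest(vraHost, refreshToken) {
    return {
        url: "https://" + vraHost + "/iaas/api/login",
        method: "POST",
        body: { refreshToken: refreshToken } // response carries the bearer token
    };
}
```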

In conclusion, managing vRealize Automation 8 can be a complex task, but by mastering the vracli command-line interface and understanding how to use the REST API, you can simplify many aspects of vRA management. Additionally, by carefully considering log bundling and other configuration options, you can ensure that your vRA environment runs smoothly and efficiently. Happy automating!

vRO

As a developer, I understand the importance of automation tools in streamlining workflows and improving productivity. One such tool that has gained popularity in recent years is VMware Aria Automation Orchestrator, still widely known by its former name, vRealize Orchestrator (vRO). vRO allows for seamless integration of various information systems built on different technologies and protocols, providing a unified system. In this blog post, I will discuss my experience with developing a plugin for oVirt, an open-source virtualization platform, to integrate it with vRO.

Background

oVirt is an open-source virtualization platform that offers features similar to VMware vSphere. However, vRO does not have any built-in support for oVirt, and there are no ready-made plugins available from third-party developers. This lack of support posed a challenge for me as I wanted to work with different virtualization platforms within the same environment.

Developing the Plugin

To integrate oVirt with vRO, I had two options:

1. Develop a plugin from scratch using vRealize Orchestrator Plug-in SDK and oVirt Java SDK.

2. Use an existing plugin for vSphere and modify it to work with oVirt.

I chose option 1, as it allowed me to customize the plugin to my specific needs and ensure a more seamless integration with oVirt. The development process was not without its challenges, primarily due to the lack of comprehensive documentation for vRO plug-in SDK. However, I was able to overcome these obstacles by leveraging online resources and experimenting with different approaches.

The plugin I developed supports the following features:

* Inventory discovery: The plugin can discover and list all the virtual machines (VMs) in oVirt.

* VM power operations: Users can power on, power off, or suspend VMs through vRO.

* VM reboot: Users can initiate a reboot of a VM directly from vRO.

* VM delete: Users can delete VMs directly from vRO.

The plugin also supports the use of tags to filter VMs based on their properties. For example, users can tag VMs by their department or project name, and then use these tags to filter the list of VMs in vRO.
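The tag filter itself is straightforward. A sketch of the logic follows, assuming inventory objects shaped like `{name, tags}` (an assumption about the plugin's internal model, not its actual types):

```javascript
// Keep only the VMs that carry the given tag. VMs without a tags
// property are treated as untagged.
function filterVmsByTag(vms, tag) {
    return vms.filter(function (vm) {
        return (vm.tags || []).indexOf(tag) !== -1;
    });
}
```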

Challenges and Future Improvements

During the development process, I encountered several challenges:

1. Lack of documentation: The lack of comprehensive documentation for vRO plug-in SDK made it difficult to understand certain aspects of the API.

2. Limited functionality: oVirt does not have a built-in feature to distribute VMs across different clusters, so the plugin had to rely on manual intervention to achieve this.

3. Inconsistencies in API structure: The APIs for vSphere and oVirt are structured differently, which made it challenging to implement a unified interface for both platforms.

To address these challenges, I plan to continue developing the plugin and expanding its functionality. I also hope to see more comprehensive documentation for vRO plug-in SDK in the future.

Conclusion

In conclusion, integrating oVirt with vRO has been a rewarding experience that has taught me valuable lessons about the importance of documentation and the challenges of developing plugins for different platforms. While there are still limitations to the plugin’s functionality, I am confident that continued development will address these issues and provide a more seamless integration between oVirt and vRO.

I encourage readers to try out the plugin and provide feedback on any observed errors, missing features, or other suggestions. Your input will be invaluable in helping me improve the plugin and make it more useful for the community.

The plugin can be found on GitHub. If you have any questions or would like to share your experiences with integrating oVirt and vRO, please feel free to comment below.

vRA 8

Integrating phpIPAM with vRealize Automation (vRA): A Guide to Successful Implementation

Introduction

phpIPAM is a powerful IP Address Management (IPAM) tool that helps organizations manage their IP addresses effectively. However, integrating it with other systems can be challenging, especially when it comes to implementing it with vRealize Automation (vRA). In this blog post, we will explore the process of integrating phpIPAM with vRA and provide a comprehensive guide on how to do it successfully.

Background

When it comes to IPAM, organizations have two primary options: phpIPAM and vRealize Automation (vRA). While both are robust solutions, they were designed for different purposes. phpIPAM is an open-source solution that focuses on IP address management, while vRA is a cloud-based automation platform that helps organizations manage their virtual infrastructure. Therefore, integrating these two systems can be challenging, but it’s not impossible.

Current State of Integration

There is a ready-to-use plugin available for integrating vRA with phpIPAM. However, this plugin has limited functionality, and its capabilities are not enough for productive use. As a result, organizations need to develop additional functions to make the integration more comprehensive.

Reasons for Integration

Before we dive into the integration process, it’s essential to understand why integrating phpIPAM with vRA is crucial. Here are some reasons why:

1. Scalability: Both phpIPAM and vRA are designed to scale, but integrating them can help organizations manage their IP addresses more efficiently.

2. Flexibility: Integrating these two systems allows organizations to use the strengths of both solutions and create a more robust IP management system.

3. Cost-Effective: Integrating phpIPAM with vRA can be cost-effective as it eliminates the need for additional hardware or software.

4. Simplified Management: Integration streamlines IP address management processes, making it easier for organizations to manage their IP addresses effectively.

How to Integrate phpIPAM with vRA

While there is a ready-to-use plugin available, it’s not sufficient for productive use. Therefore, we will discuss the process of integrating phpIPAM with vRA from scratch. Here are the basic steps involved in the integration process:

Step 1: Installation

To begin with, you need to install both phpIPAM and vRA on your system. You can download the latest version of phpIPAM from its official website, while vRA is available on VMware’s official website.

Step 2: Configuration

Once you have installed both systems, you need to configure them properly. For phpIPAM, you need to create a new database and define the IP address ranges you want to manage. Similarly, for vRA, you need to configure the platform properly and enable API access.

Step 3: Plugin Development

To integrate phpIPAM with vRA, you need to develop a plugin that can communicate between both systems. You can use Python as the development language, and you can find the source code for the existing plugin on GitHub.
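Whatever language you choose, the heart of the plugin is translating vRA IPAM operations into phpIPAM REST calls. For example, allocating an address maps onto phpIPAM's first_free request on the addresses controller; the sketch below only builds the request (path and body fields should be verified against your phpIPAM version's API documentation).

```javascript
// Build the request that reserves the first free address in a subnet.
// The description value is just an illustrative default.
function firstFreeAddressRequest(baseUrl, appId, subnetId, hostname) {
    return {
        url: baseUrl + "/api/" + appId + "/addresses/first_free/" + subnetId + "/",
        method: "POST",
        body: { hostname: hostname, description: "allocated by vRA" }
    };
}
```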

Step 4: Integration Testing

After developing the plugin, you need to test it thoroughly to ensure it’s working correctly. You can use testing tools like Pytest or Unittest to validate the functionality of the plugin.

Step 5: Deployment

Once you have tested the plugin successfully, you can deploy it on your production environment. You can do this by creating a new package that includes the plugin and other required files.

Conclusion

Integrating phpIPAM with vRA can be challenging, but it’s not impossible. By following the steps outlined in this guide, organizations can successfully integrate these two systems and create a more robust IP management system. Remember to test the integration thoroughly before deploying it on your production environment to ensure smooth functionality.

Deploying Terraform and vRealize Automation (vRA) for Customized Infrastructure Management

As a Terraform developer, you may be wondering which method is optimal for creating deployments with the vRA provider. In this blog post, we will compare three methods for creating deployments with vRA: using cloud templates and the element catalog, creating deployments from scratch with Terraform, and using blueprints.

Method 1: Using Cloud Templates and Element Catalog

The first method is to use cloud templates and element catalog provided by vRA. This method is simple and easy to understand, but it has some limitations. For example, you cannot create custom resource types or modify existing resources. Additionally, the element catalog is not always up-to-date, and you may need to manually update it.

Method 2: Creating Deployments from Scratch with Terraform

The second method is to create deployments from scratch with Terraform. This method provides more flexibility and control over your infrastructure, but it can be more complex and time-consuming. You need to define all the resources manually, including their dependencies and relationships. Additionally, you need to handle the lifecycle of your resources, such as creating, updating, and deleting them.
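Under the hood, both the vRA catalog and the Terraform provider drive the same vRA REST API. As an illustration, a catalog-item deployment request can be sketched as a plain payload builder; the field names follow vRA 8's Service Broker catalog API, but treat them as assumptions to verify against your version.

```javascript
// Build a catalog-item deployment request: POST this payload to the
// catalog item's /request endpoint on the vRA host.
function catalogDeploymentRequest(catalogItemId, projectId, name, inputs) {
    return {
        url: "/catalog/api/items/" + catalogItemId + "/request",
        method: "POST",
        body: {
            deploymentName: name,
            projectId: projectId,
            inputs: inputs || {} // blueprint input values, empty if none
        }
    };
}
```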

Method 3: Using Blueprints

The third method is to use blueprints provided by vRA. This method combines the benefits of using cloud templates and element catalog with the flexibility of Terraform. You can create custom resource types and modify existing resources, and the element catalog is automatically updated. Additionally, you can define your deployments in a more declarative way, which makes it easier to understand and maintain.

Comparison of Methods

Now, let’s compare these three methods based on some key factors:

| Factor | Method 1 (Cloud Templates and Element Catalog) | Method 2 (Creating Deployments from Scratch with Terraform) | Method 3 (Using Blueprints) |
| --- | --- | --- | --- |
| Ease of use | Simple and easy to understand | More complex and time-consuming | Combines simplicity and flexibility |
| Flexibility | Limited customization options | Full control over resources and their dependencies | Custom resource types and modifications allowed |
| Maintenance | Manual updates required for element catalog | Automatic updates through Terraform | Declarative definitions make it easier to understand and maintain |

Conclusion

In conclusion, using blueprints provided by vRA is the most practical method for creating deployments with Terraform. It combines the benefits of cloud templates and the element catalog with the flexibility of Terraform, making your deployments easier to understand and maintain. However, if you need full control over every resource and are comfortable with the complexity of defining them from scratch, creating deployments directly with Terraform may be the better choice for you.

Deploying Infrastructure with Terraform and vRealize Automation

In this blog post, we will discuss how to quickly deploy a new environment using Terraform and vRA (vRealize Automation, now part of VMware Aria Automation). We will also cover the pros and cons of using vRA blueprints versus Terraform-defined cloud templates.

Quick Start with vRA Blueprints

To get started quickly, we can use pre-built vRA blueprints from the vRA catalog. These blueprints define a complete environment, including virtual machines, networks, and other resources. By using these blueprints, we can easily deploy a new environment without having to manually configure each resource.

However, this approach is not very flexible, as we cannot modify the blueprints or change the schema. Additionally, any changes made to the blueprints will not be reflected in the deployed environment, so we would need to update the blueprints separately.

Terraform-defined Cloud Templates

Terraform allows us to define our own cloud templates and use them to deploy environments. This approach provides more flexibility than using vRA blueprints, as we can modify the template to suit our needs. Additionally, any changes made to the template will be reflected in the deployed environment.

There are two ways to work with cloud templates in Terraform:

1. Use a cloud provider’s API to retrieve the template and deploy it.

2. Use a local file to store the template and deploy it.

Pros and Cons of Each Approach

Here are some pros and cons of each approach:

### Using vRA Blueprints

Pros:

* Quick and easy to get started

* Provides a complete environment with minimal configuration

Cons:

* Limited flexibility in terms of customization

* Changes to the blueprints will not be reflected in the deployed environment

### Using Terraform-defined Cloud Templates

Pros:

* More flexible and customizable

* Changes to the template will be reflected in the deployed environment

Cons:

* Requires more manual configuration

* May require more technical expertise to set up and manage

Best Practices for Working with vRA Blueprints and Terraform-defined Cloud Templates

Here are some best practices for working with both approaches:

### Using vRA Blueprints

* Use the vRA catalog to find pre-built blueprints that match your requirements.

* Customize the blueprints as needed to fit your environment.

* Use version control to track changes to the blueprints and manage different versions.

### Using Terraform-defined Cloud Templates

* Define your cloud templates in a version control system, such as Git, to track changes and manage different versions.

* Use descriptive names for your resources and variables to make your code easier to understand and maintain.

* Test your templates thoroughly before deploying them to production.

Conclusion

In conclusion, both vRA blueprints and Terraform-defined cloud templates have their pros and cons, and the best approach will depend on your specific needs and requirements. However, using Terraform-defined cloud templates provides more flexibility and customization, while using vRA blueprints is quicker and easier to set up. By following best practices and leveraging version control, you can ensure that your environments are well-managed and easy to maintain.

Configure Windows Intune Policies to Disable Windows Copilot and Enhance Security for Your Cloud PC and Windows 11 Devices

Disabling Windows Copilot with Windows Intune Settings Catalog Policy

In a recent update of Microsoft Intune, a new method was introduced to disable Windows Copilot through a settings catalog policy. This feature allows administrators to manage the setting directly within the settings catalog, making it easier and more convenient than before. In this blog post, we will guide you through the steps to disable Windows Copilot using the settings catalog policy, and also provide an alternative method using PowerShell and MS Graph.

Disabling Windows Copilot through Settings Catalog Policy

The process to disable Windows Copilot through the settings catalog policy is simple and straightforward. Here’s a step-by-step guide:

1. Open the Microsoft Intune admin center (formerly the Microsoft Endpoint Manager portal) and select “Devices” from the left navigation menu.

2. Go to “Configuration profiles” and click “Create profile”.

3. Select “Windows 10 and later” as the platform and “Settings catalog” as the profile type, then click “Create”.

4. Click “Add settings” and search for “Copilot” in the settings catalog.

5. Select the “Turn Off Windows Copilot” setting and set it to “Enabled”.

6. Assign the profile to the desired groups, then review and save it to apply the policy.

After following these steps, administrators can effectively manage the Windows Copilot setting for their organization’s devices. If you would rather create the policy programmatically with PowerShell and MS Graph, see the alternative method below; check out my other blog post that outlines how to get started with MS Graph and PowerShell.

Alternative Method: Disabling Windows Copilot using PowerShell and MS Graph

If you prefer to use PowerShell and MS Graph to disable Windows Copilot, you can run the following code:

```powershell
$graphUrl = "https://graph.microsoft.com/v1.0"
$token    = "your_access_token"
$deviceId = "device_id"

$headers = @{
    "Authorization" = "Bearer $token"
}

$body = @{
    "displayName" = "Disable Windows Copilot"
    "description" = "Disables the Windows Copilot feature."
    "settings"    = @(
        @{
            "name"  = "Windows Copilot"
            "value" = "disabled"
        }
    )
}

# The body must be serialized to JSON before sending it to Graph.
$response = Invoke-RestMethod -Uri "$graphUrl/device/$deviceId/policy" -Method Post `
    -Body ($body | ConvertTo-Json -Depth 5) -ContentType "application/json" -Headers $headers
```

This code will disable the Windows Copilot feature on the specified device. Note that you need to replace “your_access_token” with a valid access token for your organization’s Azure AD account, and “device_id” with the ID of the device you want to manage.

Conclusion

Disabling Windows Copilot is now easier than ever with the new settings catalog policy feature in Windows Intune. By following the steps outlined in this blog post, administrators can easily manage the Windows Copilot setting for their organization’s devices. Additionally, we have provided an alternative method using PowerShell and MS Graph for those who prefer to use these tools. We hope you find this insightful for easily disabling the Copilot within the Windows 11 physical and Windows 365 Cloud PC fleet of devices. Please let us know if you have any questions or need further assistance.

Personalize Your Windows 365 Boot Sign-in Experience with Ease

Customizing Branding for Windows 365 Boot Sign-in Screen

In my previous walkthrough in the Intune portal, I deployed Windows 365 Boot; all that remained was to incorporate the company logo, text, and lock screen wallpaper. To do so, I decided to modify the existing Windows 365 configuration profiles that were originally deployed during the W365 Boot deployment. In this blog post, I will provide detailed steps on how to customize the login screen on the Windows 365 Boot PC, enhancing your company’s branding and identity.

To personalize the end-user experience on the physical Windows 365 Boot device, follow these straightforward steps:

1. Name and Logo

a. Go to the “Devices” tab in the Intune Portal.

b. Select the “Windows 365 Boot” device.

c. Click on the “Edit” button next to the “Name” field.

d. Enter your company name in the “Name” field, keeping it short and concise.

e. Upload a small-sized company logo (preferably 71 x 65 pixels or less) using the “Company Logo Url” field.

2. Lock Screen Wallpaper

a. Click on the “Edit” button next to the “Lock Screen Wallpaper” field.

b. Select a background image that represents your company’s branding and identity.

c. Make sure the selected image is reasonably small in file size and suited to a lock screen background.

3. Apply Changes

a. Click on the “Save” button to apply the changes.

b. Restart the Windows 365 Boot device for the changes to take effect.

The above steps will allow you to customize the login screen on the Windows 365 Boot PC, enhancing your company’s branding and identity. The Name and Logo will appear on the sign-in screen, and the Lock Screen Wallpaper will be displayed on the lock screen.

Troubleshooting Tips:

If you encounter any errors while applying these settings, here are some troubleshooting tips to help you resolve them:

1. Company Name Error

a. Ensure that the company name is entered correctly and without any special characters (such as spaces or symbols).

b. Try entering the company name in a different format (e.g., use all lowercase letters or remove any spaces).

c. Check whether error code 0x87d1fde8 relates to the company name field; if so, try a different, simpler name.

2. Logo Upload Error

a. Ensure that the logo image size is small (preferably 71 x 65 pixels or less).

b. Check if the error code -2016281112 is related to the logo upload, and try uploading a different logo image.

c. Make sure the logo URL field is entered correctly, with the correct protocol (http:// or https://), and the logo file path.

I hope this blog post helps you customize the branding for Windows 365 Boot sign-in screen, and provides valuable insights into troubleshooting any errors that may arise during the process. If you have any further questions or concerns, please feel free to ask in the comments section below.

Unlock the Full Potential of Windows 365 with PowerShell Reports – Download Now!

Windows 365: Report on Cloud PC Actions with PowerShell and MS Intune

As an administrator, it is essential to have a clear understanding of the actions taken on your organization’s Cloud PCs. To address this need, Microsoft has introduced the Cloud PC Actions Report in the Windows 365 ecosystem. This report provides detailed information on various actions taken by administrators on the Cloud PCs, making it easier to track and troubleshoot issues. In this blog post, we will explore how to access and make sense of the new report available within Microsoft Intune.

Accessing the Cloud PC Actions Report

To view the report in the Microsoft Intune portal, follow these steps:

1. Sign in to your Microsoft Intune account.

2. Click on the “Reports” tab.

3. Click on “Cloud PC Actions” under the “All Reports” section.

The Cloud PC Actions Report will display a list of all actions taken on your organization’s Cloud PCs, along with their status and date initiated. This report includes the following actions:

1. Create Cloud PC

2. Update Cloud PC

3. Delete Cloud PC

4. Restart Cloud PC

5. Start Cloud PC

6. Stop Cloud PC

7. Retry Action

Downloading the Report via MS Graph

If you want to download the report in CSV format using PowerShell, follow these steps:

1. Install the MS Graph Powershell Module by running the following command:

```powershell
Install-Module -Name Microsoft.Graph
```

2. Connect to Microsoft Graph and specify the permission scopes you need. For example, to connect with “CloudPC.Read.All” and “CloudPC.ReadWrite.All,” run the following command:

```powershell
Connect-MgGraph -Scopes "CloudPC.Read.All","CloudPC.ReadWrite.All"
```

3. Check the signed-in user account by running the following command:

```powershell
Get-MgUser -UserId <UserPrincipalName>
```

Replace `<UserPrincipalName>` with the actual username of the user whose report you want to generate.

4. Build the request parameters with all the fields for the report:

```powershell
$params = @{
    "reportType" = "CloudPCActionReport"
    "from"       = (Get-Date).AddDays(-1)
    "to"         = (Get-Date)
    "filters"    = @{
        "cloudPCName" = "<CloudPCName>"
        "action"      = "<Action>"
    }
}
```

Replace `<CloudPCName>` with the actual name of the Cloud PC whose report you want to generate, and `<Action>` with the action you want to filter by.

5. Use the following command to generate the report (Invoke-MgGraphRequest reuses the token obtained by Connect-MgGraph, so no manual Authorization header is required):

```powershell
$report = Invoke-MgGraphRequest -Uri "https://graph.microsoft.com/v1.0/reports/CloudPCActionReport" -Method POST -Body ($params | ConvertTo-Json)
```

6. The report will be displayed in CSV format, which you can download and use for troubleshooting and audit purposes.

Conclusion

The Cloud PC Actions Report is a powerful tool within the Windows 365 ecosystem that provides detailed information on various actions taken by administrators on the Cloud PCs. By following the steps outlined in this blog post, you can access and make sense of the report available within Microsoft Intune. With the ability to track and troubleshoot issues, this report can help you improve your organization’s Cloud PC management and enhance productivity.