Using Text Analytics Key Phrase Cognitive Services API from PowerShell

There are abundant samples for invoking the Text Analytics Cognitive Services API from C# and other languages such as Node.js. While searching, however, I did not find any examples of consuming the Text Analytics API through PowerShell, and that is what this blog post is about.

In this blog post, I am going to show how to use the Text Analytics Key Phrase Cognitive Services API to extract key phrases from a given sentence or paragraph. Cognitive Services are REST APIs that can be invoked from any language and any platform. They are built using industry standards, and message exchange happens through JSON payloads.

It is important to understand that Cognitive Services are provided as a PaaS offering on Azure. You need a valid Azure subscription and must provision a Cognitive Services resource in a resource group. While provisioning this resource, Text Analytics API should be chosen as the API type. The Text Analytics API service contains a set of REST APIs, one of which is Key Phrase extraction. This is shown in Figure 1.

Cognitive Service Text Analytics

After the service is provisioned, it generates a set of unique keys associated with the service. Any client that wants to invoke and consume this instance of Cognitive Services should send one of these keys with the request. The service validates the key and, if it matches a key the service holds, allows the request to execute successfully.

Now that the service is provisioned, it's time to write the client using PowerShell.


Open your favorite PowerShell console and write the script shown next. The code is quite simple and consists of just a few statements.

# URI of the Key Phrases REST API. Use the endpoint of your own Text Analytics resource
$keyPhraseURI = ""

# key to identify a valid request. You should provide your own key
$apiKey = "xxxxxxxxxxxxxxxxxxxxxxx"

# preparing the JSON document as the message payload
$documents = @()
$message = @{"language" = "en"; "id" = "1"; "text" = "I had a wonderful experience! The rooms were wonderful and the staff were helpful." }
$documents += $message
$final = @{documents = $documents}
$messagePayload = ConvertTo-Json $final

# invoking the Key Phrases REST API
$result = Invoke-RestMethod -Method Post -Uri $keyPhraseURI -Headers @{ "Ocp-Apim-Subscription-Key" = $apiKey } -Body $messagePayload -ContentType "application/json" -ErrorAction Stop

The code is well commented, but to explain it briefly: the first statement declares a variable holding the URL of the Text Analytics Key Phrases REST API, and the next declares the key required for authenticating with Cognitive Services. You should provide your own key.

The next set of statements prepares the JSON message payload that is passed to the REST API as the request body. A hashtable is declared containing language, id, and text key-value pairs. It is converted into JSON format, and the last statement invokes the REST API using the Invoke-RestMethod cmdlet, passing in the URI, a header containing a custom item, the body, and the content type. It is important that the header contains the Ocp-Apim-Subscription-Key custom header with the API key as its value. The request will fail if this header is missing or contains an invalid key.

The response object is a JSON object containing the key phrases extracted by the Text Analytics service.
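To see the shape of the returned payload, you can serialize the response back to JSON. This is a sketch assuming the request above succeeded; the exact phrases returned depend on the input text and service version.

```powershell
# Inspect the raw response returned by Invoke-RestMethod
$result | ConvertTo-Json -Depth 4

# The response has this general shape (phrases depend on the input):
# {
#     "documents": [
#         {
#             "keyPhrases": [ "wonderful experience" ],
#             "id": "1"
#         }
#     ],
#     "errors": []
# }
```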

Executing $result.documents.keyPhrases on the console returns the key phrases extracted by the Text Analytics service, as shown next.

PS C:\Users\rimodi> $result.documents.keyPhrases


wonderful experience


Hope you liked this blog post. Please send your feedback, and if you would like to stay connected, you can reach me on Twitter @automationnext and LinkedIn @

Happy coding and Cheers!



Powershell Desired State Configuration Partial Configurations without ConfigurationID


One of the most awaited and interesting features of WMF 5 DSC is Partial Configuration. Before WMF 5 DSC, it was difficult to split a large configuration into multiple smaller configurations. Partial Configuration enables us to split a configuration into multiple smaller configuration fragments across multiple files. Partial configurations are implemented exactly the same way as any general DSC configuration. It is the responsibility of the LCM on a target server to combine all the configuration fragments into a single configuration and apply it.

Nothing in the configuration fragments or files indicates the existence of a partial configuration. Each partial configuration is complete in itself and can be applied independently as a configuration to any server. Partial configurations are deployed on a pull server following a sequence of steps, and the target node's LCM is configured to download these partial configurations, combine them, and apply them on the host. The magic of Partial Configuration is conducted by the LCM.

Partial Configurations work with DSC pull, push, as well as mixed mode. In this blog we will delve deeper into partial configurations in pull mode. This means that the LCM of each server in the network should be configured to pull configurations from a pull server (a web server or SMB share) and should be able to identify the configurations distinctly on these pull servers.

All preview releases of WMF 5 had partial configurations available as a feature, but they worked using a property of the LCM known as ConfigurationID, whose value is a GUID. With the RTM release, partial configurations still work with ConfigurationID, but they also work when ConfigurationID is not provided. This is a huge leap from previous releases, as there is no longer any need to embed GUIDs in DSC configuration names. Now configurations can be referred to just by their names, which is much more natural and easier to use and manage.

Benefits of Partial Configuration

Some of the benefits of Partial Configurations are

  1. Multiple authors can author configurations independently and simultaneously for servers in a network.
  2. Incremental configurations can be applied to servers without modifying any existing configurations.
  3. Modular authoring of configurations.
  4. Removed dependency on a single MOF file. This was the case in DSC v1, where only one MOF file was allowed and applied to a server at a given point in time; a newer configuration (MOF) would replace the current configuration.

Steps for using Partial Configuration

To make Partial Configuration work, the following steps should be performed.

  1. Creation of Pull Server
  2. Configuring LCM MetaConfiguration of servers in the network.
  3. Authoring Configurations
  4. Deploying Configurations on the pull server.

We will not go into the details of creating a pull server; I will cover that in a separate blog post. For the purpose of this post, we will assume that the pull servers are already deployed and configured.

There can be more than one pull server within an enterprise, so to make the example in this blog more realistic, we will assume there are two pull servers, named marapwkvm0 and SQLWitness0. The target node that will pull partial configurations from these two servers is marapdovm. We also have two configurations, each deployed to one of the pull servers. The LCM of the target machine (marapdovm) will be configured with these two pull servers and configurations.

LCM Configuration

Let's now focus on configuring a server's LCM. Specifically, we need to configure:

  1. RefreshMode with a value of "Pull".
  2. Optionally, but desirably, ConfigurationMode with a value of "ApplyAndAutoCorrect" to keep the server in the expected state. Also, from the blog's perspective, we will be able to see something tangible.
  3. RefreshMode is set to Pull within the Settings block; this makes all partial configurations use pull mode.
  4. Multiple ConfigurationRepositoryWeb resource instances, each representing a pull server. The URL of the pull server on marapwkvm0 is https://marapwkvm0:8090/PSDSCPullServer.svc/, running on port 8090. The URL of the pull server on SQLWitness0 is https://sqlwitness0:8100/PSDSCPullServer.svc/, running on port 8100.
    • Each pull server is configured with a RegistrationKey. This is a shared key between the target node and the pull server. The RegistrationKey for each pull server should be provided within this block; it has been blanked out for security reasons, and you should put your own RegistrationKey values here.
    • ConfigurationNames is a new property added to ConfigurationRepositoryWeb. This property determines the configurations that should be downloaded and applied on the target node. It is an array property and can contain multiple configuration names. The names should match the deployed configurations exactly.
  5. Multiple PartialConfiguration resource instances, each representing a configuration on a pull server. On pull server marapwkvm0, a configuration named "IISInstall" is deployed, whose whole purpose is to install IIS; on pull server SQLWitness0, another configuration named "IndexFile" is deployed, whose purpose is to generate an .htm file with some content. The names of the partial configurations should match the configurations available on the pull servers as well as the names provided as values to the "ConfigurationNames" property of ConfigurationRepositoryWeb.

The entire code for LCM configuration is shown here. This code should be run on target node. In our case it is marapdovm.
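Since the original screenshot of the script is not reproduced here, the following is a sketch of what the LCM meta-configuration looks like, based on the settings described above. The pull server URLs, repository names, and configuration names come from this post; the RegistrationKey values are placeholders you must replace with your own.

```powershell
[DSCLocalConfigurationManager()]
configuration PartialConfigurationDemo
{
    Node localhost
    {
        Settings
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }

        # Pull server hosting the IISInstall configuration
        ConfigurationRepositoryWeb IISConfig
        {
            ServerURL          = 'https://marapwkvm0:8090/PSDSCPullServer.svc/'
            RegistrationKey    = '<your registration key>'
            ConfigurationNames = @('IISInstall')
        }

        # Pull server hosting the IndexFile configuration
        ConfigurationRepositoryWeb FileConfig
        {
            ServerURL          = 'https://sqlwitness0:8100/PSDSCPullServer.svc/'
            RegistrationKey    = '<your registration key>'
            ConfigurationNames = @('IndexFile')
        }

        PartialConfiguration IISInstall
        {
            Description         = 'Installs the Web-Server role'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]IISConfig')
            RefreshMode         = 'Pull'
        }

        PartialConfiguration IndexFile
        {
            Description         = 'Creates the index .htm file'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]FileConfig')
            RefreshMode         = 'Pull'
        }
    }
}

# Compile the meta-configuration; this generates localhost.meta.mof
PartialConfigurationDemo -OutputPath 'C:\PartialConfigurationDemo'
```

Compiling this produces a .meta.mof file under C:\PartialConfigurationDemo, the path later passed to Set-DscLocalConfigurationManager.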


The above code should be executed only after the partial configurations are authored and deployed on the respective pull servers.

Just to reiterate: the PartialConfiguration blocks define the configuration fragments. Two partial configurations, "IISInstall" and "IndexFile", are defined. The "IISInstall" configuration is available through the IISConfig pull server entry, while the "IndexFile" configuration is available through the FileConfig entry. The names of the partial configurations are important because they must match exactly the names of the configurations on the pull servers. You will see next that the "IISInstall" configuration is authored and available on marapwkvm0, and the "IndexFile" configuration on SQLWitness0. The "ConfigurationSource" property attaches the pull server to the partial configuration.

IISInstall Configuration

This is a simple configuration responsible for installing IIS (Web-Server) on a server using the WindowsFeature resource. Executing the configuration generates a MOF file, and a corresponding checksum file is also generated for it. Both files, the MOF and the checksum, are copied over to the ConfigurationPath folder, which in my case is "C:\Program Files\WindowsPowershell\DSCservice\Configuration". The configuration uses localhost as the node name; however, while copying the files, they are renamed to match the configuration name.

The New-DscChecksum command is responsible for generating the checksum for the configuration MOF file. Both IISInstall.mof and IISInstall.mof.checksum should now be available in the "C:\Program Files\WindowsPowershell\DSCservice\Configuration" folder on the marapwkvm0 server.
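The original screenshot is not available, so here is a sketch of what the IISInstall configuration and its deployment steps could look like. The staging path C:\DscConfigs is an assumption for illustration; the destination is the pull server configuration folder mentioned above.

```powershell
configuration IISInstall
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        # Install the IIS (Web-Server) Windows feature
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compile the configuration; this produces localhost.mof
IISInstall -OutputPath 'C:\DscConfigs\IISInstall'

# Copy the MOF under the configuration's name and generate its checksum
$dest = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
Copy-Item 'C:\DscConfigs\IISInstall\localhost.mof' "$dest\IISInstall.mof"
New-DscChecksum -Path "$dest\IISInstall.mof"
```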

IndexFile Configuration

This is again a simple configuration, responsible for creating an .htm file in the C:\inetpub\wwwroot folder on a server using the File resource. Executing the configuration generates a MOF file, and a corresponding checksum file is also generated for it. Both files, the MOF and the checksum, are copied over to the ConfigurationPath folder, which in my case is "C:\Program Files\WindowsPowershell\DSCservice\Configuration". The configuration uses localhost as the node name; however, while copying the files, they are renamed to match the configuration name.


Both IndexFile.mof and IndexFile.mof.checksum should now be available in the "C:\Program Files\WindowsPowershell\DSCservice\Configuration" folder on the SQLWitness0 server.
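As with IISInstall, the original screenshot is not reproduced, so here is a sketch of what the IndexFile configuration and its deployment steps could look like. The file name, its contents, and the C:\DscConfigs staging path are illustrative assumptions.

```powershell
configuration IndexFile
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        # Create a simple .htm file under the IIS web root
        File IndexHtm
        {
            DestinationPath = 'C:\inetpub\wwwroot\index.htm'
            Contents        = '<html><body><h1>Deployed via DSC partial configuration</h1></body></html>'
            Type            = 'File'
            Ensure          = 'Present'
        }
    }
}

# Compile, copy under the configuration's name, and generate the checksum
IndexFile -OutputPath 'C:\DscConfigs\IndexFile'
$dest = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
Copy-Item 'C:\DscConfigs\IndexFile\localhost.mof' "$dest\IndexFile.mof"
New-DscChecksum -Path "$dest\IndexFile.mof"
```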

Now, it is time to move to the target node and apply the LCM configuration that we authored earlier.
Execute the command below to apply the LCM configuration on the target node.

Set-DscLocalConfigurationManager -Path "C:\PartialConfigurationDemo" -Force -Verbose

Below is the output we should be able to see


After the LCM configuration is modified to make partial configurations work, it's time to apply the configuration by asking the LCM to pull the configurations from the pull servers.

Execute the below command to pull, store and combine the configurations on the target node.

Update-DscConfiguration -wait -Verbose

Below is the output we should see


The above command downloads the configurations, combines them, and puts them into a pending state; it does not apply them immediately. When the LCM is next invoked, depending on the value of ConfigurationModeFrequencyMins, the configuration will be applied based on the value of ConfigurationMode. In our case, it will apply the configuration and also auto-correct it.

To execute the configuration immediately, run the following command

Start-DscConfiguration -UseExisting -Wait -Force -Verbose

Below is the output we should see


VOILA!!! You can see that both configurations, with their respective resources (IIS and the index file), are applied to the server.

We have applied partial configurations to a node by referring to the configurations by their names instead of using ConfigurationID GUIDs.
This is just the beginning; stay tuned for more detailed information.

If you like this post please share and if you have any feedback please share that too.

In the next post, we will go deeper into Partial Configurations on WMF 5 RTM.


Installing WMF 5.0 April preview release

Before installing the WMF 5.0 April preview release, we first have to download it. The WMF 5.0 April preview release can be downloaded from the WMF 5.0 download page.

Also, before installing the WMF 5.0 April preview release, remember to save all your files and close applications, as the installation will ask for a restart of the server.

Installing the WMF 5.0 April release is quite simple. However, ensure that the following updates are uninstalled from the operating system before installing it.

  1. KB3055381
  2. KB3055377
  3. KB2908075

Also, there are different installers for different operating systems. Based on your operating system and its processor architecture, the proper installer should be chosen.

Windows Server 2012 R2, Windows 8.1 Pro, and Windows 8.1 Enterprise

  1. x64: WindowsBlue-KB3055381-x64.msu
  2. x86: WindowsBlue-KB3055381-x86.msu

Windows Server 2012

  1. x64: Windows8-KB3055377-x64.msu

Windows 7 SP1 and Windows Server 2008 R2 SP1

  1. x64: Windows6.1-KB2908075-x64.msu
  2. x86: Windows6.1-KB2908075-x86.msu

In this case, I am installing on a 64-bit Windows Server 2012 R2 machine, so I chose the "WindowsBlue-KB3055381-x64.msu" installer.

Double-clicking the installer starts the process of installing the WMF 5.0 April preview release.


Click on the Open button. It will then ask for confirmation to install; click on the "Yes" button.

Accept the EULA and the installation will start. It takes approximately a minute to install.



It will ask for a restart of the server. Click on the "Restart Now" button.

After the restart, you can go to Control Panel | Programs and Features | View Installed Updates to verify the installation.


You can also verify successful installation through PowerShell using the Get-HotFix cmdlet, as shown below.
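The original screenshot is not reproduced here; a check along these lines should confirm the update is present (the KB number shown is the one for Windows Server 2012 R2 / Windows 8.1):

```powershell
# Verify the WMF 5.0 April preview update is installed
Get-HotFix -Id KB3055381
```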


You should also be able to see the updated PowerShell and WSMan versions, as shown below.
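For example, both versions can be checked via the $PSVersionTable automatic variable:

```powershell
# Check the PowerShell engine and WSMan stack versions after the upgrade
$PSVersionTable.PSVersion
$PSVersionTable.WSManStackVersion
```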


Hope you enjoyed this post.


Azure Site Recovery: Hyper-V to Azure part – 6

In this series of articles, I show how to make Azure Site Recovery work with Hyper-V, step by step.

This is part 6 of the series.

Now, it's time for the actual failover. There can be a planned or an unplanned failover. The difference is that in a planned failover we shut down the source virtual machine manually and then start the failover, whereas in an unplanned failover we simply start, or power up, the virtual machine in the target datacenter. Azure Site Recovery provides both options.


In this article, we will see how the failover works.

Click on the Failover | Planned Failover menu. This pops up a window confirming the direction of failover, which in this case is from Hyper-V to Azure. It also asks whether we want to shut down the source (Hyper-V) virtual machine and synchronize the target with the latest changes. Select the checkbox for shutting down the source virtual machine and synchronizing the latest updates as shown below, and click on the complete button.


The failover process starts and executes a number of steps, as shown below.


The source on-premises virtual machine is shut down automatically by the Azure Site Recovery agent.


The screen below shows the progress of the tasks: the failover is in progress and the pre-failover tasks are complete.


A new virtual machine, with the same name as the on-premises virtual machine, is created in a new cloud service.


If we now open the HTTP endpoint on port 80 of the newly created virtual machine, we should be able to browse the same start.htm file, and it should still reflect my name on that page.


This shows that Azure Site Recovery has taken care of my applications and services by making them available at the time of disaster recovery.

As the last step of the failover, we have to commit it by clicking on the commit button as shown below. It will ask for confirmation; click Yes.


Now, if we want to fail back our virtual machine to our on-premises datacenter, we should navigate back to the virtual machine in the protection group, select it, and click on the Failover button.


Click on Planned Failover. This is because failbacks are, and should always be, planned.


On the resulting window, the failover direction is shown. Select the appropriate radio button depending on whether you want to synchronize data before failover or during failover. We chose synchronizing before failover; click on the complete button.


This will start the process of failback. The steps to be performed for failback are shown below.


After the step "Monitoring data synchronization" completes, we are asked to complete the failover. Go to the Jobs section, select the job, and click on "Complete Failover" to finish the failover.


The Azure-hosted virtual machine DRVM is shut down.


The failback replication would be initiated.


The on-premises virtual machine is brought back to life by switching it on. The Azure virtual machine, cloud service, storage container, and VHD blobs are deleted.


And finally the entire process should complete successfully as shown below.


With this failback, we have come full circle and are back to where we started. The difference is that a disaster happened, the virtual machine was provisioned on Azure, and when the on-premises datacenter came back to life, we failed the virtual machine back to it.

Now, it’s time to look at Recovery services in Azure site recovery.

The failovers we have done so far were manual. We can also automate the entire process; this is where recovery plans help us. They can orchestrate the entire recovery by executing tasks in an ordered series of steps, where each step can comprise a complex workload.

Go to Recovery Services | ProductionVault | Recovery Plans | Create Recovery Plan.


Provide name, source and target as shown below.


Select the virtual machines for the recovery plan and click on the complete button.


The end result should look like below.


We can further customize the recovery plan by attaching scripts to be executed before and after shutdown of the virtual machines. We can group virtual machines as well. This is very important in scenarios where you would like to shut down domain virtual machines before shutting down Active Directory.

With this, we conclude this series on Azure Site Recovery Hyper-V to Azure disaster recovery.

Hope you enjoyed the series!