Series: Desired State Configuration – Part 2 : Configuration internals

This is the second part of the series Desired State Configuration.

In this post we will look into some of the internal details about the configuration we created in the previous post.

In the last post, we covered some of the fundamentals of Desired State Configuration: what DSC is all about, a sample DSC script, and the way PowerShell has been extended to include DSC capabilities.

A DSC configuration is semantically equivalent to a PowerShell function: not only is it invoked like one, it is also treated as one internally. To prove this, run the following command in PowerShell ISE. Here “EnableFeatures” is the name of the configuration created in the earlier blog post.
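A quick sketch of the command in question, assuming the “EnableFeatures” configuration has already been loaded into the current session:

```powershell
# List every property of the EnableFeatures command,
# including PSDrive, PSPath and PSProvider
Get-Command -Name EnableFeatures | Format-List *
```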


Within the output of the above command, look for the keys PSDrive, PSPath, and PSProvider.


The values of all three keys show that the configuration “EnableFeatures” is indeed nothing more than a function.

Also, if you change to the Function: drive, the configuration “EnableFeatures” will be visible in the list of functions. To prove this, type the following command in PowerShell ISE.


The prompt would change to


Type Dir in Powershell ISE.

In the result, “EnableFeatures” appears in the list of functions, naturally with a CommandType of “Configuration”.
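The drive-switching steps described above can be sketched as:

```powershell
# Switch to the Function: drive; the prompt changes to PS Function:\>
Set-Location Function:

# List the drive contents; EnableFeatures appears among the
# functions with a CommandType of "Configuration"
dir
```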


Now, let’s turn to the MOF files that are generated when the configuration is invoked. We will look at how to invoke a configuration in detail in a future post, but in short, invoking a configuration in PowerShell ISE is nothing more than dot-sourcing the configuration file and calling the configuration by name.

Management Object Format (MOF) files are generated when the configuration is invoked.

A MOF file is generated for each Node within the configuration file.

We have saved the configuration as Test.ps1 at C:\.


We will then dot-source the PowerShell script so that the “EnableFeatures” configuration is available in the global scope.


Once the script and its artifacts are loaded, the “EnableFeatures” configuration can be invoked.
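A minimal sketch of the sequence, assuming the configuration was saved as C:\Test.ps1:

```powershell
# Dot-source the script so the EnableFeatures configuration
# becomes available in the global scope
. C:\Test.ps1

# Invoke the configuration by name; this creates the EnableFeatures
# folder with one MOF file per node
EnableFeatures
```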


And the following is the output.


The above output provides a few important pieces of information:

A directory named after the configuration, “EnableFeatures”, is created within the current user’s local directory.

Within this directory, two MOF files are generated, one for each target server. In this example, they are “WIN-PETR4TD7LAA” and “SCR2i”.

The .mof files look like the one below.
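For reference, a hand-written sketch of what such a MOF file can look like for a WindowsFeature resource (the exact content depends on the resources and nodes in your configuration):

```
/*
@TargetNode='WIN-PETR4TD7LAA'
@GeneratedBy=Administrator
*/

instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
    ResourceID = "[WindowsFeature]XPS";
    Name = "XPS-Viewer";
    Ensure = "Present";
    ModuleName = "PSDesiredStateConfiguration";
    ModuleVersion = "1.0";
};

instance of OMI_ConfigurationDocument
{
    Version = "1.0.0";
    Author = "Administrator";
};
```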


The important thing to note is that this MOF file is generated from the information in the configuration file, and it is this MOF file that is used to push the configuration to the target server.

PowerShell adds a new command type called Configuration to differentiate configurations from functions. When you run the Get-Command command shown earlier, the output includes another key-value pair, CommandType, which shows that “EnableFeatures” is of type “Configuration”.


In the next post, we will look at the end-to-end process of executing a DSC configuration file.

Hope you are finding these posts useful.



Series: Desired State Configuration – Part 1 : Introduction

Desired State Configuration (DSC) is one of the most important and powerful features of PowerShell 4.0. The primary objective of DSC is twofold:

  1. To evaluate the current state of the environment
  2. To bring the environment back to the predefined configuration

DSC consists of resources that can be tracked for the above purposes. A resource can be anything you want to configure within the operating system, such as processes, services, and files. Some of the out-of-the-box resources include:

  1. Group
  2. User
  3. Script
  4. Service
  5. Registry
  6. Process etc.
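You can enumerate the resources installed on your machine with the Get-DscResource cmdlet; a quick sketch (the exact list varies per machine):

```powershell
# List the DSC resources available locally, e.g. Group, User,
# Script, Service, Registry and Process
Get-DscResource | Select-Object Name, Properties
```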

Also, there are two different models through which DSC tries to achieve its objectives

  1. Push model
  2. Pull Model

In the push model, the server designated as the desired state configurator pushes the configuration information to the designated target servers, whereas in the pull model the target servers connect to a designated server to have their environment checked and reconfigured if there are deviations.

Having both models within the DSC architecture helps in managing heterogeneous environments that could not be managed with the push model alone.

In this series we will dig into the details of DSC and create a new type of resource. We will also use this resource to configure the target environment.

In this first part of the series, we will create a new configuration that describes the expected configuration of the target servers.

Though the configuration is written within a PowerShell host, it does not look much like PowerShell; it looks more like a nested block of name-value pairs. Version 4.0 of PowerShell adds new keywords that help in writing the configuration file.

Also, new design-time capabilities have been added to PowerShell ISE that can detect a configuration and flag any errors in it. One error I have come across: while creating new DSC resources, if the name of a resource is the same as one that already exists, red squiggly lines appear in the ISE.

Another capability added to PowerShell: when a configuration is executed, instead of generating output on the host, it creates a folder in the current user’s home directory with the same name as the configuration, containing one Management Object Format (MOF) file per node. Each MOF file, named after its node, is what PowerShell eventually uses to configure the target server.

As you can see, the overall PowerShell environment has been changed and extended considerably to accommodate DSC.

Another point to note is that DSC depends on WinRM for pushing configuration information to the target servers. You will find a WMI namespace, Root/Microsoft/Windows/DesiredStateConfiguration, along with all the related classes.

Even resources you define yourself get added to this namespace.
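One way to inspect this namespace, sketched with the CIM cmdlets (output varies per machine):

```powershell
# Enumerate the classes registered in the DSC WMI/CIM namespace
Get-CimClass -Namespace root/Microsoft/Windows/DesiredStateConfiguration |
    Select-Object CimClassName
```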

Now, let’s look at a sample configuration written in PowerShell ISE.

Sample Configuration

Semantic Configuration

In the above sample, we can see some new keywords that make the configuration possible.

The first image is the actual configuration file; the second is a reference for understanding the semantics of a configuration file.

Configuration defines the overall declarative script and acts as the root of the configuration. Its semantics are very similar to those of a workflow or a function, i.e. you invoke the configuration by calling it by name.

Node defines the target server on which the configuration related to it will be applied.

Resources are the most important part of the configuration file; they refer to the actual configuration items (CIs) that you would like to monitor, evaluate, and possibly change after comparing them with the desired state. Some of the resources that come out of the box with System Center 2012 R2 eval are mentioned above. You should give each resource a logical name that represents the actual resource it refers to.

Another important concept is the parameters within the resource section. In the above example, a couple of parameters are used. Name refers to the actual resource available within the OS. Here, we have named one of the resources XPS because there is a Windows feature called “XPS-Viewer” available.

Sample Features


Ensure indicates whether this configuration intends to make the resource available or not. Ensure has two possible values: Present and Absent.

Present means that if the resource is not available it will be provisioned and made available, whereas Absent means that if the resource is available it will be de-provisioned and disabled.
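Putting the keywords together, a minimal configuration along the lines discussed above might look like this (the node name is illustrative; “XPS-Viewer” is the Windows feature mentioned earlier):

```powershell
Configuration EnableFeatures
{
    # Node names the target server the configuration applies to
    Node "WIN-PETR4TD7LAA"
    {
        # WindowsFeature is an out-of-the-box resource;
        # "XPS" is the logical name we chose for it
        WindowsFeature XPS
        {
            Name   = "XPS-Viewer"   # the actual Windows feature
            Ensure = "Present"      # provision it if it is missing
        }
    }
}
```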

Please note that the meaning of a parameter can change depending on the type of resource, although System Center 2012 R2 eval is consistent in its naming and usage.

In the next post we will look at some of the internals of the configuration we created above.


Solving SC Orchestrator Web Service HTTP 404 error

In the last blog post, we saw that SC Orchestrator R2 eval has taken steps to reduce the occurrence of HTTP 500 errors by modifying the stored procedures responsible for calculating the authorization cache. In this post, we will look at a solution to the HTTP 404 errors.

We saw that the [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache table holds the authorization cache records. This cache is cleared every 10 minutes and is also maintained (entries whose timestamps are older than the current time are deleted) every 30 minutes; the cache and its schedules are kept within the Orchestrator SQL database. The cache is also rebuilt if it is empty when a request is made to the web service.

As noted above, if we clear the AuthorizationCache table and then invoke the web service, the authorization cache will be calculated from scratch and new Runbooks will find their way into it.

Therefore, the steps to remove the HTTP 404 error are:

1. Use the SQL statement below to empty the [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache table:

TRUNCATE TABLE [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache

2. Open a browser and browse to the web service to view its metadata. This re-populates the authorization cache. The URL is generally of the form http://localhost:81/Orchestrator2012/Orchestrator.svc, though it may be different in your case.
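The two steps can also be scripted; a sketch using Invoke-Sqlcmd from the SQL Server PowerShell tools, with placeholder server and database names:

```powershell
# Step 1: empty the authorization cache table
# ("SCORCH-DB" and "Orchestrator" are placeholder names)
Invoke-Sqlcmd -ServerInstance "SCORCH-DB" -Database "Orchestrator" `
    -Query "TRUNCATE TABLE [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache"

# Step 2: request the web service metadata so the cache is rebuilt;
# adjust the URL to match your environment
Invoke-WebRequest -Uri "http://localhost:81/Orchestrator2012/Orchestrator.svc" -UseDefaultCredentials
```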

Hope this post helps solve some of your Orchestrator web service issues.


Better Authorization Cache Management in SC Orchestrator R2 eval

One of the issues with the System Center Orchestrator 2012 SP1 web service is that it returns either an HTTP 404 or 500 error in a lot of situations. You can experience this yourself: create a new Runbook and immediately invoke the web service requesting it. You will also notice that the new Runbook is not visible within the Orchestrator web console. There are primarily two reasons for these errors.

  1. Authorization Cache

Orchestrator maintains an authorization cache for folders and Runbooks. This authorization cache reflects the users with their rights on the folders or Runbooks.

This authorization cache is cleared every 10 minutes and is also maintained (entries whose timestamps are older than the current time are deleted) every 30 minutes; the cache and its schedules are kept within the Orchestrator SQL database. The cache is also rebuilt if it is empty when a request is made to the web service. The web service uses it to determine the current user’s rights and permissions on a Runbook or folder, and the Orchestrator web console uses it for its operations as well.

Orchestrator does not remove entries from the database when you delete Runbooks and folders, or when importing Runbooks. Instead, it performs a soft delete, i.e. it sets a column called [Deleted] to 1. This causes the Orchestrator database to grow over time as imports, deletes, and new Runbook creations accumulate.

To build the authorization cache, Orchestrator executes a series of SQL stored procedures that fill the [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache table. To fill this table, Orchestrator queries the database for all folders and their Runbooks, and this is one of the reasons for the HTTP 404 and 500 errors: Orchestrator processes even records for folders and Runbooks in the deleted state when calculating the cache.

  2. Web service SqlCommand command timeout

The default command timeout of the SqlCommand used within the System Center Orchestrator web service is 30 seconds. This does not take into account that there could be thousands of rows related to folders and Runbooks involved in calculating the authorization cache.

HTTP 404 error

An HTTP 404 error occurs when an invocation is made to the web service for a folder or Runbook that has not yet found its way into the authorization cache. Typically, this happens with a recently created Runbook for which the authorization cache maintenance (mentioned above) has not yet run.

HTTP 500 error

An HTTP 500 error occurs when an invocation is made to the web service for a folder or Runbook and one of the following is happening:

  • The authorization cache maintenance is in progress and takes more than 30 seconds, so the web service’s SqlCommand times out.
  • There are no entries in the authorization cache and Orchestrator takes more than 30 seconds to build it, again triggering the SqlCommand timeout.

One of the changes in SC Orchestrator R2 is better management of the authorization cache to reduce the HTTP 500 errors. This has been done by changing the SQL stored procedures: the stored procedures for both the folder and Runbook authorization caches now have a filter condition that excludes all deleted folders and Runbooks from the calculation. In effect, in R2 only those folders and Runbooks are processed that are alive and kicking within the designer.

The SQL excerpts within the stored procedures look like the following.


Select [UniqueId] from dbo.Folders where Deleted = 0


Select [UniqueId] from dbo.Runbook where Deleted = 0

In the next post, I will provide a solution for troubleshooting these HTTP 404 and 500 errors.

Hope you find this post useful!


SC Orchestrator : Return data belonging to multiple activities using single activity

I have often seen requirements related to returning data using the “Return Data” activity. Developers adopt different approaches; some follow best practices while others become anti-patterns. One requirement that often arises is as follows:

1. There are multiple activities within a Runbook. The count could be anything from two upward.

2. Each activity has the potential to fail and error out.

3. The Runbook should return the error if there is a failure.

The typical approach to this problem looks like the below.

Typical Runbook Implementation

A closer look at the above Runbook reveals multiple “Return Data” activities used to return error information to the caller of this Runbook, one for each activity.

The above works as expected, but it is definitely not the most elegant way of returning data. One pattern for handling such a situation is shown below.

Pattern for Returning Error Data

In the above Runbook, a single “Return Error” activity is responsible for returning the error message when any activity in the Runbook fails. However, this raises a few questions, primarily:

1. What should the published data look like within the “Return Error” activity?

2. How will the “Return Error” activity know which activity has failed?

Let’s answer these questions through a step-by-step implementation, which will also form the pattern for solving the requirements mentioned above.

A. Configure the Runbook to return some data. In this case, a single “ErrorMessage” parameter is configured for returning the error message. Refer to the image below.

Configure Runbook Return Parameters

B. Configure the “Return Error” activity.

1. In the “Return Error” activity, right-click within the “Error Message” input field, go to the published data, click on the first activity for which error data needs to be returned, and also select “Show common published data”. In this case, it is the Copy File activity.

Select the first activity

2. Once “Show common published data” is selected, more published data becomes available, including “Error summary text”. Select “Error summary text” as the published data to be returned.

Select published data to return

3. Repeat steps 1 and 2 for the rest of the activities for which you want to return the error message. Eventually, the “Error Message” input field should look like the below.

Published data

The published data is of the form {Error summary text from activity1}{Error summary text from activity2}{Error summary text from activity3}{Error summary text from activity4}{Error summary text from activity5}. In this pattern, the error messages published from each activity are placed side by side.

But how does it work?

At any point in time, if there is an error in the Runbook, it has to be because of the failure of exactly one activity. The next activity executed after the failed one is the “Return Error” activity. Within the “ErrorMessage” parameter, only one of the five pieces of published data will be filled with a value; the rest will be null.

One more important point for the above pattern to work: the links between the activities and the “Return Error” activity must be configured as below, so that “Return Error” is executed only if there is an “error” or “warning” while executing the activity.

Link Configuration between activity and Return error activity

That’s it! We have answered both questions and developed the pattern needed to return dynamic data using a single “Return Data” activity.

Hope you find this post useful!