Publish a website available only in some branches with VSTS build

I have several builds that publish some web projects using the standard MSBuild task. Here is a sample configuration.

Simple image that shows the configuration of a Msbuild Task used to publish a web project.

Figure 1: Publishing a web site with msbuild task.

This is super simple thanks to the MSBuild task and a few MSBuild arguments, but quite often I face an annoying problem: what about a new project that lives only on certain branches, and should be published by the build only if it exists?

Sometimes you need to execute a task in a build only if a file exists on disk (e.g. a csproj file)

Suppose this new Jarvis.Catalog.Web project exists today only in a feature branch called feature/xyz; the feature will later be merged to develop, then to a release branch and finally to master. This poses a problem: the MSBuild task that publishes the web project will fail on every branch where that specific web project does not exist yet.

This is super annoying: you can configure the build not to fail if this specific task fails, but that will mistakenly mark every build as partially succeeded only because the project is not on that branch yet.

Thanks to the conditional execution of VSTS tasks, it is simple to configure a task to run only under a specific condition, for example only if the build was triggered for the feature/xyz branch.

This image shows the conditional execution for the task; everything is explained in the text of the post.

Figure 2: Conditional execution for tasks
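For reference, the branch-based condition shown in Figure 2 can also be written as a custom condition expression; a sketch, assuming the predefined Build.SourceBranch variable:

```
and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/feature/xyz'))
```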

This is not really a good solution, because you still need to edit the build after every merge, adding the new branches. When feature/xyz is merged back to develop, the build must be updated to run the task on the develop branch as well.

The need to edit the conditional execution of the task every time the solution is “promoted” from branch to branch is annoying and error prone.

The real solution is to run the task only if the project file exists on disk, but this condition is not present in the base set. To solve the problem correctly, the build should first run a task that checks whether the file is on disk and sets a build variable accordingly; the MSBuild task should then execute conditionally on that variable.

To implement this scenario, let's use the new PowerShell task, which now has the option to run a simple script defined inline in the build. Here is the definition of this PowerShell task, which should be placed before the MSBuild task that builds the project.

Simple PowerShell task executing an inline script

Figure 3: Simple PowerShell task executing an inline script

The script is really simple: it just uses the Test-Path cmdlet to verify whether the project file exists on disk. Here is the whole script:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments
$CatalogWebExists = Test-Path src\Catalog\Jarvis.Catalog.Web\Jarvis.Catalog.Web.csproj
write-host "##vso[task.setvariable variable=CatalogWebExists;]$CatalogWebExists"
write-host "CatalogWebExists=$CatalogWebExists"

Thanks to the ##vso directive, the task can set a build variable called CatalogWebExists, and the cool part is that the variable is created if it is not defined in the build definition. The above example shows how a PowerShell task can change the value of a build variable, or create it.

After this script runs, we have a new CatalogWebExists build variable whose value is True if the project file exists. This allows a better conditional expression for the MSBuild task.

and(succeeded(), eq(variables['CatalogWebExists'], 'true'))

This condition is exactly what we need, because it runs the task only if the build is successful and the CatalogWebExists variable is equal to true. Thanks to a simple PowerShell script I can conditionally execute a task, with the condition determined by the script.

Thanks to inline PowerShell scripts, it is really simple to execute a task with a condition evaluated by a script.

In this example I run a task only if a file exists, but you are not limited to this scenario.

Gian Maria.

Troubleshoot a failing build, a Winrm story

Many VSTS build and deploy tasks rely on WinRM to operate on a remote machine; one of the most common is the “Deploy Test Agent on” task, which installs a test agent on a remote machine.

This image shows the deploy test agent task on a standard build

Figure 1: Task to install a TestAgent on a different machine

If the machines are not in the same domain, WinRM can be a really tough opponent, because the target machine is not trusted. Usually you configure the build and insert all the passwords, but when you run the build you run into an error like this one.

Error occured on ‘yourmachinename:5985’. Details : ‘Connecting to remote server jintegration.cyberpunk.local failed with the following error message : The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. You can get more information about that by running the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.’. For troubleshooting, refer 

This error can happen for various reasons: firewalls, trust, misconfigured WinRM and so on. The annoying part is that each time you change some configuration trying to solve the problem, you usually re-queue a build and have to wait for it to complete to understand whether the problem is gone. This way of proceeding really kills your productivity.

Whenever possible, try to verify whether the build is fixed without queuing another entire build.

In the new VSTS build system the agent runs simple tasks, most of which are composed of PowerShell scripts, so it is much better to run the scripts manually to verify that your problem is gone, instead of launching another entire build. In this scenario the problem is WinRM configuration, and you can use a simple winrs command from the machine where the VSTS agent is running.

winrs -r:jintegration.cyberpunk.local -u:.\administrator -p:myP@ssword dir

This simple command tries to execute dir on the computer jintegration.cyberpunk.local through WinRM. If you see the output of the dir command, it means that WinRM is configured, the computer can be contacted and it accepts WinRM commands. If you get an error, you should fix your configuration and retry. Once the winrs command runs fine, communication between the two machines is ok; conversely, as long as winrs gives you an error, you can be 100% sure that your build will not complete.

Replicating the commands issued by the build agent outside the build, greatly reduces the time needed to solve the problem.

In my situation, here is the set of commands I ran to get the winrs command to work.

1) Ensure that WinRM is enabled on both computers, with the command winrm quickconfig
2) Verify that on the target computer port 5985 is open for connections
3) Run on both computers the command winrm set winrm/config/client '@{TrustedHosts="RemoteComputer"}', where RemoteComputer is the name of the other computer. For a quick test you can specify * as the remote computer name
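The three steps above can be sketched as commands run from an elevated prompt on each machine; the firewall rule shown here is just one possible way to open the port, and the machine name is an example:

```
REM On both machines: enable WinRM with default settings
winrm quickconfig

REM On the target machine: open port 5985 for incoming connections
netsh advfirewall firewall add rule name="WinRM HTTP" dir=in action=allow protocol=TCP localport=5985

REM On both machines: trust the other computer (use * only for quick tests)
winrm set winrm/config/client @{TrustedHosts="RemoteComputer"}
```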

After these three steps I was able to execute the winrs command, so I queued another build to verify that the task now works.

The image shows the output of the step "Deploy test agent" and it shows that now the build agent was capable of using winrm to connect to target machine.

Figure 2: Deploy test agent task now runs without error

I actually made several attempts to troubleshoot the cause of the error, but since I checked each attempt with a simple winrs command (instead of waiting for a 4-minute build to run), the total time to troubleshoot the issue was a few minutes instead of an hour or more.

Gian Maria.

Run Pester in VSTS Build

I’m not a great expert on PowerShell, but over the last years I’ve written some custom utilities that I use in various projects. The main problem is that I’ve scattered all these scripts across multiple projects, and I usually need time to find the latest version of the script that does X.

Scattering PowerShell scripts all around your projects leads to errors and a maintenance nightmare

To avoid this problem, the obvious solution is to start consolidating the PowerShell scripts, and the obvious location is a Git repository hosted in VSTS. Now that I’m starting the consolidation, I also want to create some unit tests with Pester to help me during development, and obviously I want the Pester unit tests to run inside a VSTS build.

I had some problems running them in the Hosted build, because I got lots of errors when I tried to install the Pester module. Luckily my friend @turibbio gave me this link, which helped me solve the problem.

My final script is:

param(
    [string] $outputFile = 'TestRun.xml'
)

Install-PackageProvider -Name NuGet -Force -Scope CurrentUser
Install-Module -Name Pester -Force -Verbose -Scope CurrentUser

Import-Module Pester
Invoke-Pester -OutputFile $outputFile -OutputFormat NUnitXml

This simple script accepts only a single parameter, the output file for the test run. It installs the NuGet package provider, then installs Pester, and finally imports the Pester module and invokes Pester on the current directory.
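Invoke-Pester picks up files named *.Tests.ps1 in the working directory, so there needs to be at least one such file. A minimal sketch of a test file (Get-Greeting and the file name are hypothetical examples, not part of my real scripts; the function under test would normally be dot-sourced from its own script file):

```powershell
# Get-Greeting.Tests.ps1 - hypothetical example of a Pester test file.
# For brevity the function under test is defined inline here.
function Get-Greeting {
    param([string] $Name)
    return "Hello $Name"
}

Describe "Get-Greeting" {
    It "greets the caller by name" {
        Get-Greeting -Name "Jarvis" | Should Be "Hello Jarvis"
    }
}
```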

While the VSTS build system is really simple to extend, it is better to create a script that runs all tests, so you can use it both during local development and during a VSTS build

To have my tests imported into the VSTS build results, I’ve configured Pester to output the file in NUnitXml format. Creating a build is really simple; it is composed of only three tasks.


Figure 1: Simple three step builds to run Pester tests on my PowerShell Scripts

As you can see, I use the GitVersion task to have a nice descriptive version for my build; Pester is run by a PowerShell task; and finally the Publish Test Results task is used to upload the test results to the build. Now I have a nice build result that has a GitVersion semantic name and also includes the summary of the tests.


Figure 2: Result of Pester Test run is included in build.

The ability to run PowerShell scripts inside a build, and to publish test results from various output formats, is what makes VSTS really simple to use for creating builds for every language: not only .NET or Java but also PowerShell.

Gian Maria.

How to manage PowerShell installation scripts

In a previous post I explained how I like to release software using a simple paradigm:

the build produces one zipped file with everything needed for a release, then a PowerShell script accepts the path of this zipped release plus installation parameters and executes every step to install/upgrade the software.

This approach has numerous advantages. First of all, you can always test the script with PowerShell ISE on a developer machine: just download from the build artifacts the version you want to test, load the installation script in PowerShell ISE and run it; if something goes wrong (the script has a bug or needs to be updated), just debug and modify it until it works.

My suggestion is to use virtual machines with snapshots. The process is:

restore a snapshot of the machine without the software installed, then run the script; if an error occurs, just restore the snapshot, fix the script, and run it again.

You can do the very same thing using a snapshot of a VM where a previous version of the software is installed, so you can verify that the script works for an upgrade, not only for a fresh installation. This is a really simple process that does not involve any tool related to any release technology.
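With Hyper-V, for example, the snapshot loop can be scripted as well; a sketch, assuming a VM named TestVM with a checkpoint named CleanInstall (both names are hypothetical examples):

```powershell
# Roll the VM back to the state captured before the software was installed
Restore-VMSnapshot -VMName "TestVM" -Name "CleanInstall" -Confirm:$false
Start-VM -Name "TestVM"

# ... run the installation script inside the VM and verify the result ...

# Optionally keep a checkpoint with the new version installed, to test future upgrades
Checkpoint-VM -Name "TestVM" -SnapshotName "AfterInstall"
```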

The ability to debug the script using VMs and snapshots is a big speedup for release-script development. If you are using some third-party engine for release management, you will probably need to trigger a real release to verify your script. Another advantage is that this process allows a manual installation, where you can simply launch the script and manually verify that everything is good.

You should store all scripts in source control along with the code; this allows you to:

1) Keep scripts aligned with the version of the software they install. If a modification of the software requires a change in the installation scripts, they are maintained together.
2) Publish the script directory as build artifacts, so each build contains both the zipped release and the script needed to install it.
3) Use the history of the scripts to track their entire lifecycle, and understand why someone changed a script in version Y of your software.
4) Test the installation scripts during a build, or run them during a build for a quick release to a test environment.

The final advantage is that the script runs on every Windows machine, without the need for tasks, agents, or other infrastructure. Once the script is ready, you first test it manually in the DEV, QA and production environments. Manual installation is really simple: just download the artifacts from the build, run the script, check the script log and manually verify that everything is ok. If something goes wrong (in DEV or QA), open a bug and let the developers fix the script until everything works.
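A manual installation from build artifacts then boils down to a single call; a sketch, where the script name, paths and parameter names are hypothetical examples:

```powershell
# Run the installation script downloaded together with the build artifacts
.\Install-JarvisService.ps1 `
    -deployFileName "C:\Drops\MyBuild_1.4.0\Jarvis.Service.zip" `
    -installationRoot "C:\Jarvis\Service"
```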

Once the script starts to stabilize, you can proceed to automate the release.

Gian Maria.

Using PowerShell scripts to deploy your software

I often use PowerShell scripts to package a “release” of a software during a build, because this gives me a lot of flexibility.

Different approaches for publishing Artifacts in build vNext

The advantage of using PowerShell is complete control over what is included in the “release” package. This allows you to manipulate configuration files, remove unnecessary files, copy files from elsewhere in the repository, and so on.

The aim of using PowerShell in a build is to create a single archive that contains everything needed to deploy a new release

This is the first paradigm: all the artifacts needed to install the software should be included in one or more zip archives. Once you have this, the only step that separates you from continuous deployment is creating another script capable of using that archive to install the software on the target system. This is the typical minimum set of parameters of such a script.

param(
    [string] $deployFileName,
    [string] $installationRoot
)

This script accepts the name of the package file and the path where the software should be installed. More complex programs accept other configuration parameters, but this is the simplest situation: a simple piece of software that runs as a Windows service and needs no configuration. The script does some preparation and then starts the real installation phase.

$service = Get-Service -Name "Jarvis - Configuration Service" -ErrorAction SilentlyContinue
if ($service -ne $null)
{
    Stop-Service "Jarvis - Configuration Service"
}

Write-Output 'Deleting actual directory'

if (Test-Path $installationRoot)
{
    Remove-Item $installationRoot -Recurse -Force
}

Write-Output "Unzipping setup file"
Expand-WithFramework -zipFile $file.FullName -destinationFolder $installationRoot

if ($service -eq $null)
{
    Write-Output "Starting the service in $installationRoot\Jarvis.ConfigurationService.Host.exe"

    & "$installationRoot\Jarvis.ConfigurationService.Host.exe" install
}

The first step is obtaining a reference to a Windows service called “Jarvis – Configuration Service”; if the service is present, the script stops it and waits for it to be really stopped. Once the service is stopped, the script deletes the current installation directory and then extracts all the files contained in the zipped archive to the same directory. If the service was not present (it is the very first installation), it invokes the executable with the install option (we are using TopShelf).
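Expand-WithFramework is a small helper defined elsewhere in the scripts; a minimal sketch of what it can look like, assuming it simply wraps the .NET Framework zip API (hence the name):

```powershell
function Expand-WithFramework {
    param(
        [string] $zipFile,
        [string] $destinationFolder
    )
    # Use the ZipFile class from the .NET Framework to extract the archive
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipFile, $destinationFolder)
}
```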

The goal of the script is to work for a first-time installation as well as for subsequent installations.

A couple of aspects of this approach are interesting. First, the software does not keep any specific configuration in the installation directory: when it is time to update, the script deletes everything and then copies the new version to the same path. Second, the script is designed to work whether the service was already installed or this is the very first installation.

Since this simple software does use some configuration in the app.config file, it is the duty of the script to reconfigure the application after the deploy.

$configFileName = $installationRoot + "\Jarvis.ConfigurationService.Host.exe.config"
$xml = [xml](Get-Content $configFileName)
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='uri']/@value" -value "http://localhost:55555"
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='baseConfigDirectory']/@value" -value "..\ConfigurationStore"
$xml.Save($configFileName)


This snippet of code uses a helper function to change the configuration file via XPath. Here there is another assumption: no one should change the configuration file manually, because it will be overwritten at the next installation. All the settings contained in the configuration file should be passed to the installation script.
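Edit-XmlNodes is another helper from the project scripts; a minimal sketch, assuming it receives an [xml] document and sets the attribute or element selected by the XPath:

```powershell
function Edit-XmlNodes {
    param(
        [xml] $doc,
        [string] $xpath,
        [string] $value
    )
    foreach ($node in $doc.SelectNodes($xpath)) {
        if ($node.NodeType -eq "Attribute") {
            $node.Value = $value       # the sample XPaths select the @value attribute
        } else {
            $node.InnerText = $value
        }
    }
}
```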

The installation script should accept any configuration value that needs to be stored in the application configuration file. This is needed to allow a full-overwrite approach, where the script deletes all previous files and overwrites them with the new version.

In this example we change the port the service is listening on, and the directory where this service will store configuration files (it is a configuration file manager). The configuration store is set to ..\ConfigurationStore, a folder outside the installation directory; this preserves the content of that folder across setups.

To simplify updates, you should ensure that it is safe to delete the old installation folder and overwrite it with the new one during an upgrade. No configuration and no modification must be necessary in files that are part of the installation.

The script uses hardcoded values: port 55555 and the ..\ConfigurationStore folder; if you prefer, you can pass these values as parameters of the installation script. The key aspect here is: every configuration file that needs to be manipulated and parametrized after installation should be placed in another directory. We always ensure that the installation folder can be deleted and recreated by the script.

This assumption is strong, but it avoids complicating the installation scripts, which would otherwise need to merge the default settings of the new version of the software with the old settings of the previous installation. For this reason, the Configuration Service uses a folder outside the standard installation to store configuration.

All scripts can be found in the Jarvis Configuration Service project.

Gian Maria.