Troubleshoot a failing build: a WinRM story

Many VSTS build and deploy tasks rely on WinRM to operate on a remote machine; one of the most common is the “Deploy Test Agent” task, which installs a test agent on a remote machine.

This image shows the Deploy Test Agent task in a standard build.

Figure 1: Task to install a TestAgent on a different machine

If you are not in a domain, WinRM can be a really tough opponent, especially because the target machine is not part of the same domain and is not trusted. Usually you configure the build and insert all the passwords, but when you run the build you run into an error like this one.

Error occured on ‘yourmachinename:5985’. Details : ‘Connecting to remote server jintegration.cyberpunk.local failed with the following error message : The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. You can get more information about that by running the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.’. For troubleshooting, refer https://aka.ms/remotevstest. 

This error can happen for various reasons: firewalls, trust, misconfigured WinRM and so on. The annoying part is that each time you change some configuration trying to solve the problem, you usually re-schedule a build and need to wait for it to complete to understand if the problem is gone. This way of proceeding is something that kills your productivity.

Whenever possible, try to verify if the build is fixed without queuing another entire build.

In the new VSTS Build, the agent runs simple tasks, most of which are composed of PowerShell scripts, so it is far better to run the scripts manually to verify that your problem is gone, instead of launching another entire build. In this scenario the problem is WinRM configuration, and you can use a simple winrs command from the machine where the VSTS agent is running.

winrs -r:jintegration.cyberpunk.local -u:.\administrator -p:myP@ssword dir

This simple command tries to execute the dir command on the computer jintegration.cyberpunk.local using WinRM. If you see the output of dir, it means that WinRM is configured, the computer can be contacted, and it accepts WinRM commands. If you get an error, you should check your configuration and retry. Once the winrs command runs fine, communication between the two machines is ok. The important aspect is that as long as the winrs command gives you an error, you can be 100% sure that your build will not complete.

Replicating the commands issued by the build agent outside the build greatly reduces the time needed to solve the problem.

In my situation, here is the set of commands I ran to get the winrs command to work; a consolidated sketch follows the list.

1) Ensure that WinRM is enabled on both computers; do this with the command winrm quickconfig
2) Verify that on the target computer port 5985 is open for connections
3) Run on both computers the command winrm set winrm/config/client '@{TrustedHosts="RemoteComputer"}', where RemoteComputer is the name of the other computer. If you want to do a quick test you can specify * as the remote computer name
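
Putting the three steps together, a quick verification session from the machine hosting the VSTS agent could look like the following sketch (machine name and credentials are the examples used in this post; Test-NetConnection is just one way to check the port and assumes a reasonably recent version of Windows):

# Step 1: enable WinRM (run on both machines, from an elevated prompt)
winrm quickconfig

# Step 2: from the agent machine, verify that port 5985 is reachable on the target
Test-NetConnection -ComputerName jintegration.cyberpunk.local -Port 5985

# Step 3: trust the other machine (run on both machines, adjusting the name)
winrm set winrm/config/client '@{TrustedHosts="jintegration.cyberpunk.local"}'

# Finally, verify that a remote command actually executes
winrs -r:jintegration.cyberpunk.local -u:.\administrator -p:myP@ssword dir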

After these three steps I was able to execute the winrs command, so I queued another build to verify that the task now works.

The image shows the output of the "Deploy test agent" step, and it confirms that the build agent is now capable of using WinRM to connect to the target machine.

Figure 2: Deploy test agent task now runs without error

I actually made several attempts to troubleshoot the cause of the error, but since I checked each one with a simple winrs command (instead of waiting for a 4-minute build to run), the total time to troubleshoot the issue was a few minutes instead of an hour or more.

Gian Maria.

Run Pester in a VSTS Build

I’m not a great PowerShell expert, but over the last few years I’ve written some custom utilities that I use in various projects. The main problem is that I’ve scattered all these scripts across multiple projects, and I usually need time to find the latest version of the script that does X.

Scattering PowerShell scripts all around your projects leads to errors and a maintenance nightmare

To avoid this problem, the obvious solution is to start consolidating PowerShell scripts, and the obvious location is a Git repository hosted in VSTS. Now that I’m starting the consolidation, I also want to create some unit tests with Pester to help me during development, and obviously I want the Pester unit tests to run inside a VSTS build.

I had some problems running inside the Hosted Build, because I got lots of errors when I tried to install the Pester module. Luckily my friend @turibbio gave me this link, which helped me solve the problem.

My final script is:

Param(
    [string] $outputFile = 'TestRun.xml'
)
# Install the NuGet package provider and the Pester module for the current user;
# -Scope CurrentUser avoids the permission errors you get on the Hosted agent.
Install-PackageProvider -Name NuGet -Force -Scope CurrentUser
Install-Module -Name Pester -Force -Verbose -Scope CurrentUser

# Run every test in the current directory and emit results in NUnit XML format.
Import-Module Pester
Invoke-Pester -OutputFile $outputFile -OutputFormat NUnitXml

This simple script accepts a single parameter, the output file for the test run; it installs the NuGet package provider, then installs Pester, and finally imports the Pester module and invokes Pester on the current directory.

While the VSTS build system is really simple to extend, it is better to create a script that runs all tests, so you can use it both during local development and during the VSTS build
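
For local development, the same script can simply be launched from the repository root; the script file name below is hypothetical, and the snippet just shows one way to peek at the NUnit XML output:

# Run the test script locally (RunPesterTests.ps1 is a hypothetical name for the script above)
.\RunPesterTests.ps1 -outputFile 'TestRun.xml'

# Quickly inspect the totals recorded in the NUnit XML result file
[xml]$results = Get-Content 'TestRun.xml'
$results.'test-results' | Select-Object total, failures, errors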

To have my tests imported into the VSTS build results, I’ve configured Pester to output the file in NUnit XML format. Creating the build is really simple, and it is composed of only three tasks.


Figure 1: Simple three step builds to run Pester tests on my PowerShell Scripts

As you can see, I use the GitVersion task to have a nice descriptive version for the build; Pester is run by a PowerShell task, and finally the Publish Test Results task is used to upload the test results to the build. Now I have a nice build result that has a GitVersion semantic name and also includes the summary of the tests.


Figure 2: Result of Pester Test run is included in build.

The ability to run PowerShell scripts inside a build, and to publish test results from various output formats, is what makes VSTS really simple to use to create builds for every language, not only .NET or Java but also PowerShell.

Gian Maria.

How to manage PowerShell installation scripts

In a previous post I explained how I like to release software using a simple paradigm:

the build produces one zipped file with everything needed for a release; then a PowerShell script accepts the path of this zipped release plus installation parameters, and executes every step needed to install/upgrade the software.

This approach has numerous advantages. First of all, you can always test the script with PowerShell ISE on a developer machine: just download from the build artifacts the version you want to test, load the installation script in PowerShell ISE, then run the script; if something goes wrong (the script has a bug or needs to be updated), just debug and modify it until it works.

My suggestion is to use virtual machines with snapshots. The process is:

restore a snapshot of the machine without the software installed, then run the script; if an error occurs, just restore the snapshot, fix the script, and run it again.

You can do the very same thing using a snapshot of a VM where a previous version of the software is installed, so you can verify that the script works for an upgrade, not only for a fresh installation. This is a really simple process that does not involve any tool related to any release technology.

The ability to debug the script using a VM and snapshots is a big speedup for release script development. If you are using some third-party Release Management engine, you will probably need to trigger a real release to verify your script. Another advantage is that this process allows you to do a manual installation, where you simply launch the script and manually verify that everything is good.

You should store all scripts in source control along with the code; this allows you to:

1) Keep scripts aligned with the version of the software they install. If a modification of the software requires a change in the installation scripts, the two are maintained together.
2) Publish the script directory as build artifacts, so each build contains both the zipped release and the scripts needed to install it.
3) Use the history of the scripts to track their entire lifecycle, so you can understand why someone changed a script in version Y of your software.
4) Test the installation scripts during builds, or run them during a build for a quick release to a test environment.

The final advantage is that the script runs on every Windows machine, without the need for tasks, agents, or other infrastructure. Once the script is ready, you first test it manually in the DEV, QA and production environments. A manual installation is really simple: just download the artifacts from the build, run the script, check the script log, and manually verify that everything is ok. If something goes wrong (in DEV or QA), open a bug and let developers fix the script until everything works.
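
For instance, a manual installation could be as simple as the following sketch (script name and paths are hypothetical):

# Download the build artifacts, then launch the installation script manually,
# passing the zipped release and the target folder (names are hypothetical)
.\Install-Release.ps1 -deployFileName 'C:\Drops\MyApp\MyApp_1.2.0.zip' `
                      -installationRoot 'C:\Services\MyApp'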

Once the script starts to be stable, you can proceed to automate the release.

Gian Maria.

Using PowerShell scripts to deploy your software

I often use PowerShell scripts to package a “release” of a piece of software during a build, because they give me a lot of flexibility.

Different approaches for publishing Artifacts in build vNext

The advantage of using PowerShell is complete control over what will be included in the “release” package. This allows you to manipulate configuration files, remove unnecessary files, copy files from somewhere else in the repository, and so on.

The aim of using PowerShell in a build is to create a single archive that contains everything needed to deploy a new release

This is the first paradigm: all the artifacts needed to install the software should be included in one or more zip archives. Once you have this, the only step that separates you from continuous deployment is creating another script that is capable of using that archive to install the software on the current system. This is the typical minimum set of parameters of such a script.

param(
    [string] $deployFileName,   # path of the zipped release produced by the build
    [string] $installationRoot  # folder where the software will be installed
)

This script accepts the name of the package file and the path where the software should be installed. More complex programs accept other configuration parameters, but this is the simplest situation: a simple piece of software that runs as a Windows service and needs no configuration. The script does some preparation and then starts the real installation phase.
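
The preparation phase is not shown here; a minimal sketch, assuming it only validates the parameters and resolves the package to the $file variable used below, could be:

# Hypothetical preparation: fail fast if the package is missing and
# resolve it to the FileInfo object ($file) used by the installation phase
if (-not (Test-Path $deployFileName)) {
    throw "Deploy file '$deployFileName' not found"
}
$file = Get-Item $deployFileName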

$service = Get-Service -Name "Jarvis - Configuration Service" -ErrorAction SilentlyContinue 
if ($service -ne $null) 
{
    Stop-Service "Jarvis - Configuration Service"
    $service.WaitForStatus("Stopped")
}

Write-Output 'Deleting actual directory'

if (Test-Path $installationRoot) 
{
    Remove-Item $installationRoot -Recurse -Force
}

Write-Output "Unzipping setup file"
Expand-WithFramework -zipFile $file.FullName -destinationFolder $installationRoot

if ($service -eq $null) 
{
    Write-Output "Starting the service in $finalInstallDir\Jarvis.ConfigurationService.Host.exe"

    & "$installationRoot\Jarvis.ConfigurationService.Host.exe" install
} 

The first step is obtaining a reference to a Windows service called “Jarvis – Configuration Service”; if the service is present, the script stops it and waits for it to be really stopped. Once the service is stopped, the script deletes the current installation directory and then extracts all the files contained in the zipped archive to the same directory. If the service was not present (it is the very first installation), it invokes the executable with the install option (we are using TopShelf).

The goal of the script is to work both for a first-time installation and for subsequent installations.

A couple of aspects are interesting in this approach. First, the software does not keep any specific configuration in the installation directory: when it is time to update, the script deletes everything and then copies the new version to the same path. Second, the script is written to work whether the service was already installed or this is the very first installation.

Since this simple software does use some configuration in the app.config file, it is the duty of the script to reconfigure the software after the deploy.

$configFileName = $installationRoot + "\Jarvis.ConfigurationService.Host.exe.config"
# Load the configuration file as an XML document so it can be edited via XPath
$xml = [xml](Get-Content $configFileName)
 
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='uri']/@value" -value "http://localhost:55555"
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='baseConfigDirectory']/@value" -value "..\ConfigurationStore"

$xml.save($configFileName)

This snippet of code uses a helper function to change the configuration file via XPath. Here there is another assumption: no one should change the configuration file by hand, because it will be overwritten on the next installation. All the settings contained in the configuration file should be passed to the installation script.
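
Edit-XmlNodes is not a built-in cmdlet; a minimal sketch of such a helper, assuming it only needs to overwrite the value of the nodes or attributes selected by the XPath, could look like this:

function Edit-XmlNodes {
    param(
        [xml] $doc,
        [string] $xpath,
        [string] $value
    )
    # Overwrite the value of every node (or attribute) matching the XPath
    foreach ($node in $doc.SelectNodes($xpath)) {
        if ($node -is [System.Xml.XmlAttribute]) {
            $node.Value = $value
        }
        else {
            $node.InnerText = $value
        }
    }
}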

The installation script should accept every setting that needs to be stored in the application configuration file. This is needed to allow a full-overwrite approach, where the script deletes all the previous files and overwrites them with the new version.

In this example we change the port the service is listening on, and we change the directory where the service will store configuration files (it is a configuration file manager). The configuration store is set to ..\ConfigurationStore, a folder outside the installation directory; this preserves the content of that folder across setups.
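
For reference, the appSettings fragment targeted by those XPath expressions would look like this after the edit:

<configuration>
  <appSettings>
    <add key="uri" value="http://localhost:55555" />
    <add key="baseConfigDirectory" value="..\ConfigurationStore" />
  </appSettings>
</configuration>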

To simplify updates, you should ensure that it is safe to delete the old installation folder and overwrite it with the new one during an upgrade. No configuration or modification should be necessary in files that are part of the installation.

The script uses hardcoded values: port 55555 and the ..\ConfigurationStore folder; if you prefer, you can pass these values as parameters of the installation script. The key aspect is: every configuration file that needs to be manipulated and parametrized after installation should be placed in another directory. We always ensure that the installation folder can be deleted and recreated by the script.

This assumption is strong, but it avoids complicating the installation script, which would otherwise need to merge the default settings of the new version of the software with the old settings of the previous installation. For this reason, the Configuration Service uses a folder outside the standard installation directory to store configuration.

All the scripts can be found in the Jarvis Configuration Service project.

Gian Maria.

Avoid using Shell commands in PowerShell scripts

I have setup scripts that are used to install software; they are based on this simple paradigm:

The build produces a zip file that contains everything needed to install the software; then we have a script that accepts the zip file as a parameter, as well as some other parameters, and installs the software on the local machine.

This simple paradigm is perfect, because we can install the software manually by launching the script from PowerShell, or we can create a Chocolatey package to automate the installation. Clearly you can use the very same script to release the software with TFS Release Management.

When a PowerShell script runs in the RM agent (or in a build agent) it has no access to the shell. This is usually the default, because agents do not run interactively, and in Release Management PowerShell scripts are usually executed remotely. This implies that you should not use anything related to the shell in your script. Unfortunately, if you look on the internet for code to unzip a zip file with PowerShell, you can find code that uses the shell.application COM object:

function Expand-WithShell(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true,
    [bool] $quietMode = $false) 
{
    # This COM object is only available when a shell is present,
    # so this function fails on non-interactive build/RM agents.
    $shell = new-object -com shell.application

    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
          Remove-Item $destinationFolder -Recurse -Force
    }

    New-Item $destinationFolder -ItemType directory

    $zip = $shell.NameSpace($zipFile)
    foreach($item in $zip.items())
    {
        if (!$quietMode) { Write-Host "unzipping $($item.Name)" }
        $shell.Namespace($destinationFolder).copyhere($item)
    }
}

This is a real problem if you run a script that uses this function in a build or RM agent. A much better alternative is a function that uses classes from the .NET Framework:

function Expand-WithFramework(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true
)
{
    # Uses the ZipFile class from the .NET Framework, which needs no shell
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
          Remove-Item $destinationFolder -Recurse -Force
    }
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipFile, $destinationFolder)
}

With this function you are not using anything related to the shell, and it can be run without problems even during a build or a release. As you can see, the function is simple, and it is probably a better solution even for interactive runs of the script.
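
Usage is straightforward; for example (paths are hypothetical):

# Extract a zipped release into the target folder, deleting any previous content
Expand-WithFramework -zipFile 'C:\Drops\MyApp_1.2.0.zip' -destinationFolder 'C:\Services\MyApp'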

Another solution is using 7-Zip from the command line: it gives you better compression and it is free, but you need to install it on every server where you want to run the script. This implies that plain zip files are probably still the simplest way to package your build artifacts for deployment.
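
If you go this way, a typical invocation (assuming 7-Zip is installed in its default location) looks like this:

# x = extract with full paths; -o sets the output folder; -y answers yes to all prompts
& "$env:ProgramFiles\7-Zip\7z.exe" x $zipFile "-o$destinationFolder" -y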

Gian Maria.