How to manage PowerShell installation scripts

In a previous post I explained how I like to release software using a simple paradigm:

the build produces one zipped file with everything needed for a release, then a PowerShell script accepts the path of this zipped release plus installation parameters and executes every step needed to install/upgrade the software.
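As a purely illustrative example (script and parameter names are hypothetical, but match the shape of the scripts shown later in this series), a release would be installed with a single call:

.\Install-MyService.ps1 -deployFileName "C:\Drops\MyService\MyService.zip" -installationRoot "C:\Apps\MyService"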

This approach has numerous advantages; first of all, you can always test the script with PowerShell ISE on a developer machine. Just download the version you want to test from the build artifacts, load the installation script in PowerShell ISE, run it, and if something goes wrong (the script has a bug or needs to be updated), just debug and modify it until it works.

My suggestion is to use virtual machines with snapshots. The process is:

restore a snapshot of the machine without the software installed, then run the script; if an error occurs, just restore the snapshot, fix the script, and run it again.
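If you use Hyper-V, this loop can itself be scripted; here is a minimal sketch, assuming a checkpoint named "CleanMachine" was taken before any installation (VM and checkpoint names are illustrative):

# Revert the VM to the clean state and start it again
Restore-VMSnapshot -VMName "TestServer" -Name "CleanMachine" -Confirm:$false
Start-VM -Name "TestServer"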

You can do the very same thing using a snapshot of a VM where a previous version of the software is installed, so you can verify that the script works for an upgrade, not only for a fresh installation. This is a really simple process that does not involve any tool tied to a specific release technology.

The ability to debug a script using a VM and snapshots is a big speedup for release script development. If you are using some third-party Release Management engine, you will probably need to trigger a real release to verify your script. Another advantage is that this process allows you to do a manual installation, where you simply launch the script and manually verify that everything is good.

You should store all scripts in source control along with the source code; this allows you to:

1) Keep scripts aligned with the version of the software they install. If a modification of the software requires a change in the installation scripts, they are maintained together.
2) Publish the script directory as a build artifact, so each build contains both the zipped release and the scripts needed to install it.
3) Track the entire lifecycle of the scripts through their history, so you can understand why someone changed a script in version Y of your software.
4) Test the installation scripts during builds, or run them during a build for a quick release to a test environment.

A final advantage is that the script runs on every Windows machine, without the need for tasks, agents, or other infrastructure. Once the script is ready to go, you first start testing it manually in the DEV, QA, and production environments. Manual installation is really simple: just download the artifacts from the build, run the script, check the script log, and manually verify that everything is ok. If something goes wrong (in DEV or QA), open a bug and let developers fix the script until everything works.
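For instance, a manual verification run can be wrapped in a transcript so there is always a log to check afterwards (script name and paths are illustrative):

# Record everything the installation script writes to the console
Start-Transcript -Path "C:\Logs\install-$(Get-Date -Format 'yyyyMMdd-HHmmss').log"
.\Install-MyService.ps1 -deployFileName "C:\Drops\MyService\MyService.zip" -installationRoot "C:\Apps\MyService"
Stop-Transcript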

Once the script starts to be stable, you can proceed to automate the release.

Gian Maria.

Checklists are prerequisites for Release Automation

In some previous posts I've dealt with how to deploy an application with a PowerShell script that uses an archive produced by a build. Automating a release can be simple or complex, depending on the nature of the software to be deployed, but there is a single suggestion that I always keep in mind:

If you don't have one or more checklists for the manual installation of a software, do not even try to automate the installation process.

Automatic Release and Continuous Deployment are a key part of DevOps culture, but before starting to write automation scripts, it is necessary to ask yourself: do I really know every single step needed to release the software?

This seems a simple question, but in my experience people always tend to forget installation steps: obscure settings in the Windows registry or in some files, operating system settings, prerequisites, service packs, and so on.

Since the devil is in the details, deploying software manually without a checklist is almost impossible, because the knowledge of "how to deploy the software" is scattered among team members.

Scripts are stupid: there is no human intelligence behind a script; it simply does what you told it to do, nothing more.

Before even trying to write a script, you should write one or more checklists for the installation: a prerequisite checklist, an installation checklist, a post-installation checklist, and so on. Each checklist should contain a series of tests and tasks to be accomplished in order to release the software. If you cannot take a person, hand them the installation checklists, and have them install the software successfully, how can you expect to write a script that automates the process?

The correct process to create an automated script is:

1) Generate the prerequisite, installation, and post-installation checklists.
2) Run them manually until they are complete and correct, writing down the time needed for each step.
3) Start automating the most time-consuming steps.
4) Proceed until every step is automated.

Usually it is not necessary to immediately try to automate everything; for example, if you use virtual machines, you can use golden images to get machines with all prerequisites already installed. This simplifies the deployment process because you can avoid writing PowerShell DSC, Puppet, or Chef scripts to install the prerequisites.

If a specific installation step is difficult to automate (because there is some human intelligence behind it), let that task be executed manually, but automate the other expensive parts of the checklist. There is always value in automating everything that can be easily automated.

Try to use virtual machine templates and sysprepped VMs to simplify the setup of prerequisites. Start creating automation for operations that are repeated for each upgrade, not those that are done only once.

Step by step you will eventually reach the point where you have automated everything, and you can start doing Continuous Deployment. If you followed the checklists and a rigorous process, your deployment environment will be stable. If you immediately jump into writing automation scripts, even before knowing all the operations required to install the software ("after all, it is a simple web site"), you will end up with a fragile automatic deployment environment that will eventually require more maintenance time than a manual install.

Gian Maria.

Running Unit Tests on a different machine during a TFS 2015 build

First of all I need to thank my friend Jakob Ehn, who pointed me in the right direction for creating this particular build. In this post I'll share with you my journey to run tests on a different machine than the one that is running the build.

For some builds it is useful to be able to run some unit tests (NUnit in my scenario) on a machine different from the one that is running the build. There are a lot of legitimate reasons for doing this: for a project I'm working on, running a set of tests requires a huge number of prerequisites installed (LibreOffice, Ghostscript, etc.). Instead of installing those prerequisites on all agent machines, or installing them on a single build agent and using capabilities, I'd like to be able to run the build on any build agent but run the tests on a specific machine that has all the prerequisites installed.

Sometimes it is necessary to run tests during a build on a machine different from the one where the build agent is running.

The solution is quite simple, because VSTS / TFS already has all the build tasks needed to execute tests on a different machine.

The very first step is copying all the DLLs that contain tests to the target machine; this is accomplished by the Windows Machine File Copy task.

Figure 1: File copy task in action

This is a really simple task; the only suggestion is to never specify the password in clear text, because everyone who can edit the build can read it. In this situation the password is stored in the RmTestAdminPassword variable, and that variable is set up as a secret.

Figure 2: Store sensitive information as secret variables of the build

Then we need to add a Visual Studio Test Agent Deployment task, to deploy the Visual Studio Test Agent on the target machine.

Figure 3: Visual Studio Test Agent Deployment

Configuration is straightforward: you need to specify a machine group or a list of target machines (point 1), then the user that will be used to run the test agent (point 2); finally, I've specified a custom location on my network for the Test Agent installer. If you do not specify anything, the agent is downloaded from http://go.microsoft.com/fwlink/?LinkId=536423, but this downloads approximately 130 MB of data. For faster builds it is preferable to download the agent once, move the installer to a shared network folder, and instruct the task to grab the agent from that location.

Finally, you use the Run Functional Tests task to actually execute the tests on the target machine.

Figure 4: Running functional test from target machine configuration

You specify the machine(s) you want to use (point 1), then all the DLLs that contain tests (point 2), and you can also enable code coverage (point 3). Even if the task is called Run Functional Tests, it actually uses the Visual Studio Test runner to run tests, so you can run whatever tests you like.

Thanks to TFS 2015 / VSTS build, we already have all tasks needed to run Unit Tests on target machines.

If you are running NUnit tests, or tests from any framework other than MSTest, this task will fail, because the target machine has no test adapter to run them. The failure output tells you that the agent was not able to find any tests to run in the specified location. This happens even if you added the NUnit NuGet adapter to the project. The solution is simple: first of all, locate all the needed DLLs in the packages folder of your project.

Figure 5: Installing the NUnit TestAdapter NuGet package downloads all required DLLs to your machine

Once you have located those four DLLs on your hard disk, you should copy them to the target machine in the folder C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions. This folder was created by the Visual Studio Test Agent Deployment task and contains all the extensions that will be automatically loaded by the Visual Studio Test Agent. Once you have copied the DLLs, that machine will be able to run NUnit tests without problems.
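As a sketch, the copy can be done with PowerShell through the administrative share (machine name and package version are illustrative; check the actual path inside your packages folder):

# Copy the NUnit adapter DLLs to the test agent extensions folder on the target machine
$adapterDlls = ".\packages\NUnitTestAdapter.2.0.0\lib\*.dll"
$extensionFolder = "\\TestServer\c$\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions"
Copy-Item -Path $adapterDlls -Destination $extensionFolder -Force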

Once you have copied all the required DLLs to the target folder, re-run the build and verify that the tests are indeed executed on the target machine.

Figure 6: Output of running tests on remote machine

Test output is transferred to the build machine and attached to the build result as usual, so you do not need anything else to visualize test results, exactly as if the tests were executed by the agent machine.

Figure 7: Test output is included in the build output like standard unit tests run by the build agent

Once the tasks are in place, everything is carried out by the test agent; test results are downloaded and attached to the build results, as for standard unit tests executed on the build agent machine.

The only drawback of this approach is that it takes some time (on my system about 30 seconds) before the tests start executing on the target machine, but apart from this, you can execute tests on a remote machine with little effort.

Gian Maria.

Create a Release with PowerShell, Zipped Artifacts and Chocolatey

In a previous post I described how to create a simple PowerShell script that is capable of installing a software starting from a zipped file that contains the "release" of the software (produced by a build) and some installation parameters. Once you have this scenario up and running, releasing your software automatically is quite simple.

Once you have automated the installation with a PowerShell script plus an archive file with everything needed to install the software, you are only a step away from Continuous Deployment.

A possible release vector is Chocolatey, a package manager for Windows based on NuGet. I'm not a Chocolatey expert, but at the very basic level, to create a Chocolatey package you only need a bunch of files and a PowerShell script that installs the software.

The most basic form of a Chocolatey package is composed of the files that contain the artifacts to be installed and a PowerShell script that does the installation.

The very first step is creating a .nuspec file that defines what will be contained in the Chocolatey package. Here is a possible content of the release.nuspec file:

<package>
  <metadata>
  …
  </metadata>

   <files>
        <file src="Tools\Setup\chocolateyInstall.ps1" target="Tools" />
        <file src="Tools\Setup\ConfigurationManagerSetup.ps1" target="Tools" />
        <file src="Tools\Setup\JarvisUtils.psm1" target="Tools" />
        <file src="release\Jarvis.ConfigurationService.Host.zip" target="Artifacts" />
  </files>

</package>

The really interesting part of this file is the <files> section, where I simply list all the files I want to be included in the package. ConfigurationManagerSetup.ps1 and JarvisUtils.psm1 are the original installation scripts I wrote, and Jarvis.ConfigurationService.Host.zip is the name of the artifact generated by the PackRelease.ps1 script.

To support Chocolatey you only need to create another script, called chocolateyInstall.ps1, that will be called by Chocolatey to install the software. This script is really simple: it parses the Chocolatey input parameters and invokes the original installation script to install the software.

Write-Output "Installing Jarvis.ConfigurationManager service. Script running from $PSScriptRoot"

$packageParameters = $env:chocolateyPackageParameters
Write-Output "Passed packageParameters: $packageParameters"

$arguments = @{}

# Script used to parse parameters is taken from https://github.com/chocolatey/chocolatey/wiki/How-To-Parse-PackageParameters-Argument

# Now we can use the $env:chocolateyPackageParameters inside the Chocolatey package
$packageParameters = $env:chocolateyPackageParameters

# Default the values
$installationRoot = "c:\jarvis\setup\ConfigurationManager\ConfigurationManagerHost"

# Now parse the packageParameters using good old regular expression
if ($packageParameters) {
    $match_pattern = "\/(?<option>([a-zA-Z]+)):(?<value>([`"'])?([a-zA-Z0-9- _\\:\.]+)([`"'])?)|\/(?<option>([a-zA-Z]+))"
    $option_name = 'option'
    $value_name = 'value'

    Write-Output "Parameters found, parsing with regex";
    if ($packageParameters -match $match_pattern )
    {
       $results = $packageParameters | Select-String $match_pattern -AllMatches
       $results.matches | % {
            
            $arguments.Add(
                $_.Groups[$option_name].Value.Trim(),
                $_.Groups[$value_name].Value.Trim())
       }
    }
    else
    {
        Throw "Package Parameters were found but were invalid (REGEX Failure)"
    }

    if ($arguments.ContainsKey("installationRoot")) {

        $installationRoot = $arguments["installationRoot"]
        Write-Output "installationRoot Argument Found: $installationRoot"
    }
} else 
{
    Write-Output "No Package Parameters Passed in"
}

Write-Output "Installing ConfigurationManager in folder $installationRoot"

$artifactFile = "$PSScriptRoot\..\Artifacts\Jarvis.ConfigurationService.Host.zip"

if(!(Test-Path -Path $artifactFile ))
{
     Throw "Unable to find package file $artifactFile"
}

Write-Output "Installing from artifacts: $artifactFile"

if(!(Test-Path -Path "$PSScriptRoot\ConfigurationManagerSetup.ps1" ))
{
     Throw "Unable to find package file $PSScriptRoot\ConfigurationManagerSetup.ps1"
}


if(-not(Get-Module -name jarvisUtils)) 
{
    Import-Module -Name "$PSScriptRoot\jarvisUtils"
}

& "$PSScriptRoot\ConfigurationManagerSetup.ps1" -deployFileName $artifactFile -installationRoot $installationRoot

This script does almost nothing; it delegates everything to the ConfigurationManagerSetup.ps1 file. Once the .nuspec file and this script are written, you can use nuget.exe to manually generate a package and verify the installation on a target system.
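A minimal sketch of this manual loop, assuming you run it from the folder that contains release.nuspec (the version number is illustrative):

# Create the package; -NoPackageAnalysis suppresses NuGet warnings that do not apply to Chocolatey packages
nuget pack .\release.nuspec -Version 1.0.0 -NoPackageAnalysis

# Install the freshly created package from the local folder to verify it
choco install Jarvis.ConfigurationService.Host -source "$PWD" -force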

When everything is ok and you are able to create and install a Chocolatey package locally, you can schedule Chocolatey package creation with a TFS build. The advantage is automatic publishing and SemVer version management done with GitVersion.exe.

Figure 1: Task to generate Chocolatey Package.

To generate the Chocolatey package you can use the standard NuGet Packager task, because Chocolatey uses the very same technology as NuGet. As you can see from Figure 1, the task is really simple and needs very few parameters to work. If you want more detail, I've blogged in the past about using this task to publish NuGet packages:

Publish a Nuget Package to NuGet/MyGet with VSO Build

This technique is perfectly valid for VSTS or TFS 2015 on-premises. Once the build has finished and the package has been published, you should check on NuGet or MyGet that the package is ok.

Figure 2: Package listing on MyGet, I can verify all published versions

Now you can install it with this command:

choco install Jarvis.ConfigurationService.Host `
    -source 'https://www.myget.org/F/jarvis/api/v2' `
    -packageParameters "/installationRoot:C:\Program files\Proximo\ConfigurationManager" `
    -force

All the parameters needed by the installation script should be specified with the -packageParameters command line option. It is the duty of chocolateyInstall.ps1 to parse this string and pass all the parameters on to the original installation script.

If you want to install a prerelease version (the ones with a suffix), you need to specify the exact version and use the -pre parameter.
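For example (the version number is illustrative):

choco install Jarvis.ConfigurationService.Host `
    -source 'https://www.myget.org/F/jarvis/api/v2' `
    -version 1.2.0-beta0001 -pre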

A working example of the scripts and nuspec file can be found in the ConfigurationManager project.

Gian Maria.

Using PowerShell scripts to deploy your software

I often use PowerShell scripts to package a "release" of a software during a build, because it gives me a lot of flexibility:

Different approaches for publishing Artifacts in build vNext

The advantage of using PowerShell is complete control over what will be included in the "release" package. This allows you to manipulate configuration files, remove unnecessary files, copy files from somewhere else in the repository, and so on.

The aim of using PowerShell in a build is creating a single archive that contains everything needed to deploy a new release.

This is the first paradigm: all the artifacts needed to install the software should be included in one or more zip archives. Once you have this, the only step that separates you from Continuous Deployment is creating another script that is capable of using that archive to install the software on the current system. This is the typical minimum set of parameters of such a script:

param(
    [string] $deployFileName,
    [string] $installationRoot
)

This script accepts the name of the package file and the path where the software should be installed. More complex programs accept other configuration parameters, but this is the simplest situation: a simple piece of software that runs as a Windows service and needs no configuration. The script does some preparation and then starts the real installation phase.

$service = Get-Service -Name "Jarvis - Configuration Service" -ErrorAction SilentlyContinue 
if ($service -ne $null) 
{
    Stop-Service "Jarvis - Configuration Service"
    $service.WaitForStatus("Stopped")
}

Write-Output 'Deleting actual directory'

if (Test-Path $installationRoot) 
{
    Remove-Item $installationRoot -Recurse -Force
}

Write-Output "Unzipping setup file"
# Expand-WithFramework is a helper function from the JarvisUtils module
Expand-WithFramework -zipFile $deployFileName -destinationFolder $installationRoot

if ($service -eq $null) 
{
    Write-Output "Starting the service in $finalInstallDir\Jarvis.ConfigurationService.Host.exe"

    &amp; "$installationRoot\Jarvis.ConfigurationService.Host.exe" install
} 

The first step is obtaining a reference to a Windows service called "Jarvis - Configuration Service"; if the service is present, the script stops it and waits for it to be really stopped. Once the service is stopped, the script deletes the current installation directory and then extracts all the files contained in the zipped archive to the same directory. If the service was not present (it is the very first installation), it invokes the executable with the install option (we are using TopShelf).

The goal of the script is to work for a first-time installation as well as for subsequent installations.

A couple of aspects are interesting in this approach: first, the software does not keep any specific configuration in the installation directory; when it is time to update, the script deletes everything and then copies the new version to the same path. Second, the script is made to work whether the service was already installed or this is the very first installation.

Since this simple software does use some configuration in the app.config file, it is the duty of the script to reconfigure the application after the deploy.

$configFileName = $installationRoot + "\Jarvis.ConfigurationService.Host.exe.config"
$xml = [xml](Get-Content $configFileName)
 
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='uri']/@value" -value "http://localhost:55555"
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='baseConfigDirectory']/@value" -value "..\ConfigurationStore"

$xml.save($configFileName)

This snippet of code uses a helper function to change the configuration file with XPath. Here there is another assumption: no one should change the configuration file, because it will be overwritten on the next installation. All the configuration values that are contained in the configuration file should be passed to the installation script.
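The helper lives in JarvisUtils.psm1; here is a minimal sketch of how such a function can be implemented (the real implementation may differ):

function Edit-XmlNodes {
    param (
        [xml] $doc,
        [string] $xpath,
        [string] $value
    )

    # Select every node matching the XPath expression and overwrite its value
    $nodes = $doc.SelectNodes($xpath)
    foreach ($node in $nodes) {
        if ($node.NodeType -eq "Element") {
            $node.InnerXml = $value
        }
        else {
            # Attribute or text node
            $node.Value = $value
        }
    }
}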

The installation script should accept any configuration value that needs to be stored in the application configuration file. This is needed to allow for a full overwrite approach, where the script deletes all previous files and overwrites them with the new version.

In this example we change the port the service is listening on and the directory where this service will store configuration files (it is a configuration file manager). The configuration store is set to ..\ConfigurationStore, a folder outside the installation directory. This preserves the content of that folder on the next setup.

To simplify updates, you should ensure that it is safe to delete the old installation folder and overwrite it with the new one during an upgrade. No configuration or manual modification should be necessary in files that are part of the installation.

The script uses hardcoded values: port 55555 and the ..\ConfigurationStore folder; if you prefer, you can pass these values as parameters of the installation script. The key aspect here is: every configuration file that needs to be manipulated and parametrized after installation should be placed in another directory. We always ensure that the installation folder can be deleted and recreated by the script.

This assumption is strong, but it avoids complicating the installation scripts, which would otherwise need to merge the default settings of the new version of the software with the old settings of the previous installation. For this reason, the Configuration Service uses a folder outside the standard installation to store configuration.

All scripts can be found in the Jarvis Configuration Service project.

Gian Maria.