Azure DevOps multi-stage pipeline environments

In a previous post on releasing with multi-stage pipelines and YAML code I briefly introduced the concept of environments. In that example I used an environment called single_env, and you may be surprised that, by default, the environment is automatically created when the release runs.
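
To recap how the environment comes into play: in a multi-stage YAML pipeline, a deployment job targets an environment simply by naming it. Here is a minimal sketch; the stage, job, and step names are illustrative, not the exact pipeline of the previous post.

# Minimal sketch of a deployment job targeting an environment; if
# single_env does not exist yet, Azure DevOps creates it on the first run.
stages:
- stage: deploy
  jobs:
  - deployment: deploy_application
    environment: single_env        # the name is all that links job and environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying to single_env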

This happens because an environment can be seen as a set of resources used as a target for deployments; however, in the current preview version of Azure DevOps you can only add Kubernetes resources to it. The question is: why did I use an environment to deploy an application to Azure if there is no connection between the environment and my Azure resources?

At this stage of the preview, only Kubernetes resources can be connected to an environment; no other physical resource can be linked.

I have two answers to this. The first: multi-stage release pipelines in YAML are still in preview, and we know the feature is incomplete. The second: an environment is a set of information and rules, and rules can be enforced even if there is no direct connection with physical resources.


Figure 1: Environment in Azure DevOps

As you can see in Figure 1, a first advantage of an environment is that I can immediately check its status: from the picture I can spot at a glance that a release was successfully deployed to it. Clicking on the status opens the details of the pipeline released to that environment.

Clicking on the environment name opens the environment detail page, where you can view all the information for the environment (its name and all the releases made to it), add resources, and manage Security and Checks.


Figure 2: Security and Checks configuration for an environment.

Security is pretty straightforward: it decides who can use and modify the environment. The really cool feature is the ability to create checks. If you click Checks in Figure 2, you are redirected to a page that lists all the checks that must pass before the environment can be used as a deployment target.


Figure 3: Creating a check for the environment.

As an example I created a simple manual approval, set myself as the only approver, and added some instructions. Once created, the check appears in the Checks list for the environment.


Figure 4: Checks defined for the environment single_env

If I trigger another run, something interesting happens: after the build_test stage completes, the deploy stage is blocked by the approval check.


Figure 5: Deploy stage suspended because the related environment has a check to be fulfilled

Check support in environments can be used to apply deployment gates, like the manual approvals of the standard Azure DevOps classic release pipelines.

Even if there is no physical link between the environment and the Azure account where I’m deploying my application, Azure Pipelines detects that the environment has a check and blocks the execution of the script, as you can see in Figure 5.

Clicking on the check link in Figure 5 opens a detail view with all the checks that must be fulfilled before the deploy script can continue. In Figure 6 you can see that the deploy is waiting for me to approve it; I can simply press the Approve button to let the deploy script start.


Figure 6: Checks for the deploy stage

Once an agent is available, the deploy stage can start, because all the checks for the related environment have been fulfilled.


Figure 7: Deploy stage started, all checks passed.

Once the deploy operation has finished, I can always go back and review the checks; in Figure 8 you can see how easy it is to find who approved the release in that environment.


Figure 8: Verifying the checks for the deploy stage after the pipeline finished.

At the moment the only available check is manual approval, but I expect more and more checks to become available in the future, so keep an eye on future release notes.

Gian Maria.

How to manage PowerShell installation scripts

In a previous post I explained how I like to release software using a simple paradigm:

the build produces one zipped file with everything needed for a release; then a PowerShell script accepts the path of this zipped release plus installation parameters, and executes every step needed to install or upgrade the software.
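
In its most basic form, such a script is just a parameter block plus an archive expansion. This is a minimal sketch, assuming PowerShell 5’s Expand-Archive cmdlet; the parameter names mirror those of the ConfigurationManagerSetup.ps1 script shown later in this series.

# Minimal sketch of an installation script: take the zipped release
# produced by the build and the target folder, expand, then configure.
param(
    [Parameter(Mandatory = $true)] [string] $deployFileName,
    [Parameter(Mandatory = $true)] [string] $installationRoot
)

# Expand the zipped release into the installation folder (PowerShell 5+).
Expand-Archive -Path $deployFileName -DestinationPath $installationRoot -Force

# A real script continues from here: update configuration files,
# install or restart Windows services, run database migrations, etc.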

This approach has numerous advantages. First of all, you can always test the script with PowerShell ISE on a developer machine: just download the version you want to test from the build artifacts, load the installation script in PowerShell ISE, and run it; if something goes wrong (the script has a bug or needs to be updated), just debug and modify it until it works.

My suggestion is to use virtual machines with snapshots. The process is:

restore a snapshot of the machine without the software installed, then run the script; if an error occurs, just restore the snapshot, fix the script, and run it again.

You can do the very same thing with a snapshot of a VM that has a previous version of the software installed, so you can verify that the script works for an upgrade, not only for a fresh installation. This is a really simple process that does not involve any tool tied to a specific release technology.
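
If your test machines run on Hyper-V, the snapshot loop itself can be scripted. This is a sketch assuming the Hyper-V PowerShell module; the VM and checkpoint names are illustrative.

# Taken once, on a machine without the software (or with an old version installed).
Checkpoint-VM -Name 'ReleaseTestVm' -SnapshotName 'clean'

# Each iteration: restore the clean state, run the installation script
# inside the VM, and if something fails, fix the script and restore again.
Restore-VMCheckpoint -VMName 'ReleaseTestVm' -Name 'clean' -Confirm:$false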

The ability to debug the script using a VM and snapshots is a big speedup for release script development. If you are using some third-party Release Management engine, you will probably need to trigger a real release to verify your script. Another advantage is that this process allows for manual installation: you can simply launch the script and manually verify that everything is good.

You should store all the scripts in source control together with the code; this allows you to:

1) Keep the scripts aligned with the version of the software they install. If a modification of the software requires a change in the installation scripts, they are maintained together.
2) Publish the script directory as a build artifact, so each build contains both the zipped release and the scripts needed to install it.
3) Use the history of the scripts to track their entire lifecycle, and understand why someone changed a script in version Y of your software.
4) Test the installation scripts during builds, or run them during a build for a quick release to a test environment.

A final advantage: the script runs on every Windows machine, without the need for tasks, agents, or other machinery. Once the script is ready to go, you first start testing it manually in the DEV, QA, and production environments. Manual installation is really simple: just download the artifacts from the build, run the script, check the script log, and manually verify that everything is ok. If something goes bad (in DEV or QA), open a bug and let the developers fix the script until everything works.

Once the script starts to be stable, you can proceed to automate the release.

Gian Maria.

Checklists are prerequisites for Release Automation

In some posts I’ve dealt with how to deploy an application with a PowerShell script that uses an archive produced by a build. Automating a release can be simple or complex, depending on the nature of the software to be deployed, but there is a single suggestion that I always keep in mind:

If you don’t have one or more checklists for the manual installation of a software, do not even try to automate the installation process.

Automatic releases and Continuous Deployment are a key part of DevOps culture, but before starting to write automation scripts it is necessary to ask yourself: do I really know every single step needed to release the software?

This seems a simple question, but in my experience people always tend to forget installation steps: obscure settings in the Windows registry or in some file, operating system settings, prerequisites, service packs, and so on.

Since the devil is in the details, deploying software manually without a checklist is almost impossible, because the knowledge of “how to deploy the software” is scattered among team members.

Scripts are stupid: there is no human intelligence behind a script, it simply does what you told it to do, nothing more.

Before even trying to write a script, you should write one or more checklists for the installation: a prerequisite checklist, an installation checklist, a post-installation checklist, and so on. Each checklist should contain a series of tests and tasks to be accomplished in order to release the software. If you cannot take a person, hand her/him the installation checklists, and have her/him install the software successfully, how can you expect to write a script that automates the process?

The correct process to create automated scripts is:

1) Generate the prerequisite, installation, and post-installation checklists.
2) Run them manually until they are complete and correct, writing down the time needed for each step.
3) Start automating the most time-consuming steps.
4) Proceed until every step is automated.

Usually it is not necessary to automate everything immediately; as an example, if you use virtual machines, you can use golden images to get machines with all the prerequisites already installed. This simplifies the deployment process, because you can avoid writing PowerShell DSC, Puppet, or Chef scripts to install the prerequisites.

If a specific installation step is difficult to automate (because there is some human intelligence behind it), let that task be executed manually, but automate the other expensive parts of the checklist. There is always value in automating everything that can be easily automated.

Try to use virtual machine templates and sysprepped VMs to simplify the setup of prerequisites. Start creating automation for operations that are repeated for each upgrade, not for those that are done only once.

Step by step, you will eventually reach the point where everything is automated, and you can start doing Continuous Deployment. If you followed the checklists and a rigorous process, your deployment environment will be stable. If you immediately jump into writing automation scripts, even before knowing all the operations required to install the software (“after all, it is a simple web site”), you will end up with a fragile automatic deployment environment that eventually requires more maintenance time than a manual install.

Gian Maria.

Running Unit Tests on a different machine during a TFS 2015 build

First of all I need to thank my friend Jakob Ehn, who pointed me in the right direction to create this particular build. In this post I’ll share my journey to run tests on a different machine than the one running the build.

For some builds it is useful to be able to run some unit tests (NUnit in my scenario) on a machine different from the one running the build. There are a lot of legitimate reasons for doing this: for a project I’m working on, running a set of tests requires a huge amount of prerequisites installed (LibreOffice, Ghostscript, etc.). Instead of installing those prerequisites on all agent machines, or installing them on a single build agent and using capabilities, I’d like to be able to run the build on any build agent, but run the tests on a specific machine that has all the prerequisites installed.

Sometimes it is necessary to run tests during a build on a machine different from the one where the build agent is running.

The solution is quite simple, because VSTS / TFS already has all the build tasks needed to execute tests on a different machine.

The very first step is copying all the dlls that contain tests to the target machine; this is accomplished by the Windows Machine File Copy task.


Figure 1: File copy task in action

This is a really simple task; the only suggestion is to never specify the password in clear text, because everyone who can edit the build can read it. In this situation the password is stored in the RmTestAdminPassword variable, and that variable is set up as secret.


Figure 2: Store sensitive information as secret variables of the build

Then we need to add a Visual Studio Test Agent Deployment task, to deploy the Visual Studio Test Agent to the target machine.


Figure 3: Visual Studio Test Agent Deployment

Configuration is straightforward: you need to specify a machine group or a list of target machines (point 1), then the user that will be used to run the test agent (point 2); finally, I specified a custom location on my network for the Test Agent installer. If you do not specify anything, the agent is downloaded from http://go.microsoft.com/fwlink/?LinkId=536423, but this downloads approximately 130 MB of data. For faster builds it is preferable to download the agent once, put the installer in a shared network folder, and instruct the task to grab the agent from that location.

Finally, you use the Run Functional Tests task to actually execute the tests on the target machine.


Figure 4: Running functional test from target machine configuration

You specify the machine(s) you want to use (point 1), then all the dlls that contain tests (point 2), and you can also enable code coverage (point 3). Even if the task is called Run Functional Tests, it actually uses the Visual Studio Test runner, so you can run whatever tests you like.

Thanks to the TFS 2015 / VSTS build system, we already have all the tasks needed to run unit tests on target machines.

If you are running NUnit tests, or tests of any framework other than MSTest, the task will fail, because the target machine has no test adapter to run them. The failure output tells you that the agent could not find any test to run in the specified location. This happens even if you added the NuGet NUnit adapter to the project. The solution is simple: first of all, locate all the needed dlls in the packages folder of your project.


Figure 5: Installing the NUnit TestAdapter NuGet package downloads all required dlls to your machine

Once you have located those four dlls on your disk, copy them to the target machine into the folder C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions. This folder was created by the Visual Studio Test Agent Deployment task and contains all the extensions automatically loaded by the Visual Studio Test Agent. Once you have copied the dlls, that machine will be able to run NUnit tests without problems.
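
Copying the adapters can itself be scripted; this is a sketch with illustrative paths (the NUnitTestAdapter package folder and the target machine’s administrative share are assumptions to adapt to your setup).

# Illustrative paths: adjust the package version and machine name to your setup.
$extensions = '\\TESTMACHINE\c$\Program Files (x86)\Microsoft Visual Studio 14.0' +
              '\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions'

# Copy the NUnit adapter dlls restored by NuGet into the Test Agent extension folder.
Get-ChildItem 'src\packages\NUnitTestAdapter.*\lib\*.dll' |
    Copy-Item -Destination $extensions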

Once all the required dlls are in the target folder, re-run the build and verify that the tests are indeed executed on the target machine.


Figure 6: Output of running tests on remote machine

Test output is transferred to the build machine and attached to the build result as usual, so you do not need anything else to visualize test results, exactly as if the tests had been executed on the agent machine.

image

Figure 7: Test output is included in the build output like standard unit tests run by the build agent

Once the tasks are in place, everything is carried out by the test agent; test results are downloaded and attached to the build results, as for standard unit tests executed on the build agent machine.

The only drawback of this approach is that it takes some time (on my system about 30 seconds) before the tests start executing on the target machine; but apart from that, you can execute tests on a remote machine with little effort.

Gian Maria.

Create a Release with PowerShell, Zipped Artifacts and Chocolatey

In a previous post I described how to create a simple PowerShell script capable of installing software starting from a zipped file that contains the “release” of the software (produced by a build) and some installation parameters. Once you have this scenario up and running, releasing your software automatically is quite simple.

Once you have automated the installation with a PowerShell script plus an archive file containing everything needed to install the software, you are only a step away from Continuous Deployment.

A possible release vector is Chocolatey, a package manager for Windows based on NuGet. I’m not a Chocolatey expert, but at the most basic level, to create a Chocolatey package you only need a bunch of files and a PowerShell script that installs the software.

The most basic form of a Chocolatey package is composed of the files that contain the artifacts to be installed and a PowerShell script that does the installation.

The very first step is creating a .nuspec file that defines what the Chocolatey package will contain. Here is a possible content of the release.nuspec file.

<package>
  <metadata>
  …
  </metadata>

   <files>
        <file src="Tools\Setup\chocolateyInstall.ps1" target="Tools" />
        <file src="Tools\Setup\ConfigurationManagerSetup.ps1" target="Tools" />
        <file src="Tools\Setup\JarvisUtils.psm1" target="Tools" />
        <file src="release\Jarvis.ConfigurationService.Host.zip" target="Artifacts" />
  </files>

</package>

The really interesting part of this file is the <files> section, where I simply list all the files I want included in the package. ConfigurationManagerSetup.ps1 and JarvisUtils.psm1 are the original installation scripts I wrote, and Jarvis.ConfigurationService.Host.zip is the name of the artifact generated by the PackRelease.ps1 script.

To support Chocolatey you only need to create another script, called chocolateyInstall.ps1, that Chocolatey will call to install the software. This script is really simple: it parses the Chocolatey input parameters and invokes the original installation script to install the software.

Write-Output "Installing Jarvis.ConfigurationManager service. Script running from $PSScriptRoot"

$packageParameters = $env:chocolateyPackageParameters
Write-Output "Passed packageParameters: $packageParameters"

$arguments = @{}

# The parameter parsing below is taken from
# https://github.com/chocolatey/chocolatey/wiki/How-To-Parse-PackageParameters-Argument

# Default the values
$installationRoot = "c:\jarvis\setup\ConfigurationManager\ConfigurationManagerHost"

# Now parse the packageParameters using good old regular expression
if ($packageParameters) {
    $match_pattern = "\/(?<option>([a-zA-Z]+)):(?<value>([`"'])?([a-zA-Z0-9- _\\:\.]+)([`"'])?)|\/(?<option>([a-zA-Z]+))"
    $option_name = 'option'
    $value_name = 'value'

    Write-Output "Parameters found, parsing with regex";
    if ($packageParameters -match $match_pattern )
    {
       $results = $packageParameters | Select-String $match_pattern -AllMatches
       $results.matches | % {
            
            $arguments.Add(
                $_.Groups[$option_name].Value.Trim(),
                $_.Groups[$value_name].Value.Trim())
       }
    }
    else
    {
        Throw "Package Parameters were found but were invalid (REGEX Failure)"
    }

    if ($arguments.ContainsKey("installationRoot")) {

        $installationRoot = $arguments["installationRoot"]
        Write-Output "installationRoot Argument Found: $installationRoot"
    }
} else 
{
    Write-Output "No Package Parameters Passed in"
}

Write-Output "Installing ConfigurationManager in folder $installationRoot"

$artifactFile = "$PSScriptRoot\..\Artifacts\Jarvis.ConfigurationService.Host.zip"

if(!(Test-Path -Path $artifactFile ))
{
     Throw "Unable to find package file $artifactFile"
}

Write-Output "Installing from artifacts: $artifactFile"

if(!(Test-Path -Path "$PSScriptRoot\ConfigurationManagerSetup.ps1" ))
{
     Throw "Unable to find package file $PSScriptRoot\ConfigurationManagerSetup.ps1"
}


if(-not(Get-Module -name jarvisUtils)) 
{
    Import-Module -Name "$PSScriptRoot\jarvisUtils"
}

& "$PSScriptRoot\ConfigurationManagerSetup.ps1" -deployFileName $artifactFile -installationRoot $installationRoot

This script does almost nothing: it delegates everything to the ConfigurationManagerSetup.ps1 file. Once the .nuspec file and this script are written, you can use nuget.exe to manually generate a package and verify that it installs correctly on a target system.
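
For a quick local test you can pack with nuget.exe and install straight from the folder containing the generated .nupkg. The version number here is illustrative; in the real build GitVersion computes it.

# Pack the .nuspec into a .nupkg in the current folder (version is illustrative).
nuget pack release.nuspec -Version 1.0.0 -NoPackageAnalysis

# Install the freshly built package from the local folder, to verify it
# before publishing it to a real feed.
choco install Jarvis.ConfigurationService.Host -source "$PWD" -force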

When everything is ok and you are able to create and install a Chocolatey package locally, you can schedule Chocolatey package creation in a TFS build. The advantages are automatic publishing and SemVer version management done with GitVersion.exe.


Figure 1: Task to generate Chocolatey Package.

To generate the Chocolatey package you can use the standard NuGet Packager task, because Chocolatey uses the very same technology as NuGet. As you can see in Figure 1, the task is really simple and needs very few parameters to work. If you’d like more detail, I’ve blogged in the past about using this task to publish NuGet packages:

Publish a Nuget Package to NuGet/MyGet with VSO Build

This technique is perfectly valid for VSTS and for TFS 2015 on-premises. Once the build has finished and the package has been published, you should check on NuGet or MyGet that the package is ok.


Figure 2: Package listing on MyGet, where I can verify all published versions

Now you can install the package with this command:

choco install Jarvis.ConfigurationService.Host `
    -source 'https://www.myget.org/F/jarvis/api/v2' `
    -packageParameters "/installationRoot:C:\Program Files\Proximo\ConfigurationManager" `
    -force

All the parameters needed by the installation scripts should be specified with the -packageParameters command line option. It is the duty of chocolateyInstall.ps1 to parse this string and pass all the parameters on to the original installation script.

If you want to install a prerelease version (one with a suffix) you need to specify the exact version and use the -pre parameter.

A working example of the scripts and the nuspec file can be found in the ConfigurationManager project.

Gian Maria.