Azure DevOps multi stage pipeline environments

In a previous post on releasing with Multi Stage Pipelines and YAML code I briefly introduced the concept of environments. In that example I used an environment called single_env, and you may be surprised that, by default, an environment is automatically created when the release runs.

This happens because an environment can be seen as a set of resources used as a target for deployments, but in the current preview version of Azure DevOps you can only add Kubernetes resources. The question is: why have I used an environment to deploy an application to Azure if there is no connection between the environment and my Azure resources?

At this stage of the preview, we can only connect Kubernetes to an environment; no other physical resource can be linked.

I have two answers for this. The first is: Multi Stage Release pipelines in YAML are still in preview and we know that they are still incomplete. The other is: an environment is a set of information and rules, and rules can be enforced even if there is no direct connection with physical resources.
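
To make this concrete, here is a minimal sketch of how a deployment job references an environment by name in the YAML pipeline; Azure DevOps creates the environment automatically on the first run if it does not exist.

```yaml
# Minimal sketch: a deployment job referencing an environment by name.
# 'single_env' is created automatically on the first run if missing;
# no physical resource needs to be linked to it.
- deployment: deploy
  environment: single_env
```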

Figure 1: Environment in Azure DevOps

As you can see in Figure 1, a first advantage of an environment is that I can immediately check its status. From the above picture I can immediately spot that I have a release successfully deployed on it. Clicking on the status opens the details of the pipeline that released on that environment.

Clicking on the environment name opens the environment detail page, where you can view all the information for the environment (its name and all releases made on it), add resources, and manage Security and Checks.

Figure 2: Security and Checks configuration for an environment.

Security is pretty straightforward: it is used to decide who can use and modify the environment. The really cool feature is the ability to create checks. If you click Checks in Figure 2, you are redirected to a page that lists all the checks that must pass before the environment can be used as a target for a deploy.

Figure 3: Creating a check for the environment.

As an example I created a simple manual approval, set myself as the only approver and added some instructions. Once a check is created, it is listed in the Checks list for the environment.

Figure 4: Checks defined for the environment single_env

If I trigger another run, something interesting happens: after the build_test stage completes, the deploy stage is blocked by the approval check.

Figure 5: Deploy stage suspended because the related environment has a check to be fulfilled

Check support in environments can be used to apply deployment gates to my environments, like manual approvals in the standard Azure DevOps classic release pipeline.

Even if there is no physical link between the environment and the Azure account where I’m deploying my application, Azure Pipelines detects that the environment has a check and blocks the execution of the script, as you can see in Figure 5.

Clicking on the check link in Figure 5 opens a detail view with all the checks that must pass before continuing with the deploy script. In Figure 6 you can see that the deploy is waiting for me to approve it; I can simply press the Approve button to let the deploy script start.

Figure 6: Checks for the deploy stage

Once an agent is available, the deploy stage can start, because all checks for the related environment have been fulfilled.

Figure 7: Deploy stage started, all the checks have passed.

Once the deploy operation finishes, I can always review the checks; in Figure 8 you can see how easy it is to find who approved the release in that environment.

Figure 8: Verifying the checks for the deploy stage after the pipeline finished.

Currently the only check available is manual approval, but I’m expecting more and more checks to become available in the future, so keep an eye on future release notes.

Gian Maria.

Release app with Azure DevOps Multi Stage Pipeline

Multi Stage Pipelines are still in preview on Azure DevOps, but it is time to experiment with a real build-release pipeline to taste the news. The biggest limitation at this moment is that you can use multi-stage pipelines to deploy to Kubernetes or to the cloud, but there is no support for agents inside VMs (like the standard release engine has). This support will be added in the upcoming months, but if you use Azure or Kubernetes as a target you can already use it.

My sample solution is on GitHub; it contains a really basic ASP.NET Core project with some basic REST APIs and a really simple Angular application. One of the advantages of having everything in the repository is that you can simply fork my repository and experiment.

Thanks to Multi Stage Pipelines we can finally have the build-test-release process directly expressed in source code.

First of all you need to enable Multi Stage Pipelines for your account in Preview Features, reachable by clicking on your user icon in the upper right part of the page.

Figure 1: Enable MultiStage Pipeline with the Preview Features option for your user

Once Multi Stage Pipelines is enabled, all I need to do is create a nice release file to deploy my app to Azure. The complete file is here: https://github.com/alkampfergit/AzureDevopsReleaseSamples/blob/develop/CoreBasicSample/builds/build-and-package.yaml; I will highlight the most important parts here. This is the starting part.

Figure 2: First part of the pipeline

One of the core differences from a standard pipeline file is the structure of jobs: after trigger and variables, instead of directly having jobs, we get a stages section, followed by a list of stages that in turn contain jobs. In this example the first stage is called build_test; it contains all the jobs to build my solution, run some tests and compile the Angular application. Inside a single stage we can have more than one job, and in this particular pipeline I divided the build_test stage into two jobs: the first is devoted to building the ASP.NET Core app, the other builds the Angular application.
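
As a reference, here is a minimal sketch of that structure; job names, pools and build commands are placeholders, the complete file is linked above.

```yaml
# trigger and variables omitted; see the complete file on GitHub
stages:
- stage: build_test
  jobs:
  - job: build_core             # placeholder name for the ASP.NET Core build
    pool:
      vmImage: 'windows-2019'
    steps:
    - script: dotnet build --configuration Release
      displayName: 'Build ASP.NET Core app'
  - job: build_angular          # placeholder name for the Angular build
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - script: |
        npm install
        npm run build
      displayName: 'Build Angular app'
```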

Figure 3: Second job of the first stage, building the Angular app.

This part should be familiar to everyone who is used to YAML pipelines, because it is, indeed, a standard sequence of jobs; the only difference is that we put them under a stage. The convenient aspect of having two distinct jobs is that they can run in parallel, reducing overall compilation time.

If you have groups of tasks that are completely unrelated, it is probably better to divide them into multiple jobs and have them run in parallel.

The second stage is much more interesting, because it contains a completely different type of job, called deployment, used to deploy my application.

Figure 4: Second stage, used to deploy the application

The dependsOn section specifies that this stage can run only after the build_test stage has finished. Then the jobs section starts; it contains a single deployment job. This is a special type of job where you can specify the pool, the name of an environment and then a deployment strategy; in this example I chose the simplest, a runOnce strategy composed of a list of standard tasks.
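
In YAML, the shape of this stage is roughly the following (a hedged sketch: the deployment job name and the pool are assumptions, and the real steps are shown in Figure 5).

```yaml
- stage: deploy
  dependsOn: build_test
  jobs:
  - deployment: deploy_site     # hypothetical job name
    pool:
      vmImage: 'windows-2019'
    environment: single_env
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploy tasks go here
```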

If you ask yourself what the meaning of the environment parameter is, I’ll cover it in much more depth in a future post; for this example just ignore it and consider it a way to give a name to the environment you are deploying to.

Multi Stage Pipelines introduce a new job type called deployment, used to perform the deployment of your application.

All child steps of a deployment job are standard tasks used in a standard release; the only limitation of this version is that they run on the agent, you cannot run them on machines inside the environment (you cannot add anything other than a Kubernetes cluster to an environment today).

The nice aspect is that, since this stage depends on build_test, when the deployment job runs it automatically downloads the artifacts produced by the previous stage and places them in the $(Pipeline.Workspace) folder, inside a subdirectory named after the artifact itself. This removes the need to manually transfer the artifacts of the first stage (build and test) to the deployment stage.
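
For instance, assuming the first stage published an artifact named WebSite (a hypothetical name), the deploy steps can reference it directly, without any explicit download task.

```yaml
steps:
# The artifact is downloaded automatically by the deployment job into
# $(Pipeline.Workspace)/<artifact name>; 'WebSite' is a hypothetical name.
- script: dir $(Pipeline.Workspace)\WebSite
  displayName: 'List the content of the downloaded artifact'
```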

Figure 5: Steps for deploying my site to Azure.

Deploying the site is really simple: I just unzip the ASP.NET website to a subdirectory called FullSite, then copy all the compiled Angular files into the www folder and finally use a standard AzureRmWebAppDeployment task to deploy my site to my Azure website.
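
A hedged sketch of those three steps follows; artifact names, archive names and the Azure service connection are all assumptions, the real values are in the file linked above.

```yaml
- task: ExtractFiles@1
  displayName: 'Unzip ASP.NET website'
  inputs:
    archiveFilePatterns: '$(Pipeline.Workspace)/WebSite/*.zip'   # assumption
    destinationFolder: '$(Pipeline.Workspace)/FullSite'
- task: CopyFiles@2
  displayName: 'Copy compiled Angular files into www'
  inputs:
    SourceFolder: '$(Pipeline.Workspace)/Angular'                # assumption
    TargetFolder: '$(Pipeline.Workspace)/FullSite/www'
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy site to Azure website'
  inputs:
    azureSubscription: 'MyAzureConnection'    # service connection name (assumption)
    WebAppName: 'my-sample-web-app'           # target web app name (assumption)
    packageForLinux: '$(Pipeline.Workspace)/FullSite'
```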

Running the pipeline shows you a different user interface than a standard build, clearly showing the result for each distinct stage.

Figure 6: Result of a multi stage pipeline has a different User Interface

I really appreciate this nice graphical representation of how the stages are related. For this example the structure is really simple (two sequential stages), but it clearly shows the flow of deployment and it is invaluable for more complex scenarios. If you click on Jobs you will have the standard view, where all the jobs are listed in chronological order, with the Stage column that allows you to identify in which stage the job was run.

Figure 7: Result of the multi stage pipeline in jobs view

All the rest of the pipeline is pretty much the same as a standard pipeline; the only notable difference is that you need to use the stages view to download artifacts, because each stage has its own artifacts.

Figure 8: Downloading artifacts is possible only in stages view, because each stage has its own artifacts.

Another nice aspect is that you can simply rerun each stage, which is useful in some special situations (like when your site is corrupted and you want to redeploy without rebuilding everything).

Now I only need to check if my site was deployed correctly and… voilà, everything worked as expected: my site is up and running.

Figure 9: Interface of my really simple sample app

Even if Multi Stage Pipelines are still in preview, if you need to deploy to Azure or Kubernetes they can be used without problems; the real limitation of the current implementation is the inability to deploy with agents inside VMs, a real must-have if you have an on-premises environment.

In the next post I’ll deal a little more with environments.

Gian Maria.

Build and Deploy Asp.Net App with Azure DevOps

I’ve blogged in the past about deploying ASP.NET applications, but lots of new features have changed in Azure DevOps and it is time to refresh some basic concepts. Especially in the field of web.config transforms there is always lots of confusion, and even if I’m an advocate of removing every configuration from files and source control, it is indeed something that is worth examining.

The best approach for configuration is removing it from source control, using configuration services, etc., and moving away from web.config.

But since most people still use web.config, let’s start with a standard ASP.NET application with a Web.Config and a couple of application settings that should be changed during deploy.

Figure 1: Simple configuration file with two settings

When it is time to configure your release pipeline, you MUST adhere to the mantra: Build once, deploy many. This means that you should have one build that prepares the binaries to be installed, and the very same binaries will be deployed to several environments.

Since each environment will have different values for the app settings stored in web.config, I’ll start by creating a web.config transform for the Release configuration (the one that will be released), replacing each setting with a specific token.

Figure 2: Transformation file that tokenizes the settings

In Figure 2 I show how I change the value of the Key1 setting to __Key1__ and Key2 to __Key2__. This is necessary because I’ll replace these values with the real values during the release.

The basic trick is changing configuration values in files during the build, setting tokenized values that will be replaced during the release. Using double underscores as prefix and suffix is enough for most situations.

Now it is time to create a build that generates the package to install. The pipeline is really simple: the solution is built with MSBuild with the standard configuration for publishing a web site. I’ve used the MSBuild task and not the Visual Studio task, because I do not want to need Visual Studio on my build agent; MSBuild is enough.
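
This post uses the classic editor, but the same step can be sketched in YAML; the solution path and publish arguments below are assumptions, adapt them to your project.

```yaml
- task: MSBuild@1
  displayName: 'Build and publish web site'
  inputs:
    solution: '**/*.sln'
    configuration: '$(BuildConfiguration)'
    # OutDir makes MSBuild emit the site under _PublishedWebsites
    msbuildArguments: '/p:DeployOnBuild=true /p:OutDir=$(Build.BinariesDirectory)\'
```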

Figure 3: Build and publish web site with a standard MsBuild task.

If you run the build you will be disappointed, because the resulting web.config is not transformed: it keeps the very content of the one in source control. This happens because the transformation is not done during standard web site publishing, but by Visual Studio when you use the publish wizard. Luckily enough there is a task in preview that performs web.config transformation; you can simply place this task before the MSBuild task and the game is done.

Figure 4: File transform task is in preview but it does its work perfectly

As you can see in Figure 4, you should simply specify the directory of the application, then choose XML transformation and finally the option to use the web.$(BuildConfiguration).config transformation file to transform web.config.
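
In YAML the same configuration would look roughly like this; the folder path is an assumption, point it at your web application root.

```yaml
- task: FileTransform@1
  displayName: 'Apply web.$(BuildConfiguration).config transform'
  inputs:
    folderPath: '$(Build.SourcesDirectory)/MyWebApp'   # assumption
    enableXmlTransform: true
    xmlTransformationRules: '-transform **\web.$(BuildConfiguration).config -xml **\web.config'
```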

Now you only need to copy the result of the publish into the artifact staging directory, then upload it with the standard publish artifact task.
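
A hedged sketch of those two steps, matching the MSBuild OutDir used above (the artifact name is an assumption):

```yaml
- task: CopyFiles@2
  displayName: 'Copy published site to staging directory'
  inputs:
    SourceFolder: '$(Build.BinariesDirectory)/_PublishedWebsites'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  displayName: 'Publish artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'WebSite'   # artifact name is an assumption
```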

Figure 5: Copy result of the publish task into staging directory and finally publish the artifact.

If you read other posts of my blog you know that I usually use a PowerShell script that reorganizes files, compresses them, etc., but for this simple application it is perfectly fine to copy the _PublishedWebsites/ directory as a build artifact.

Figure 6: Published artifacts after the build completes.

Take time to verify that the output of the build (Artifacts) is exactly what you expected before moving on to configure the release.

Before building the release phase, please download the web.config file and verify that the substitutions were performed and that web.config contains what you expected.

Figure 7: Both of my settings were substituted correctly.

Now it is time to create the release, but first of all I suggest you install this extension, which contains a nice task to perform substitutions during a release in an easy and intuitive way.

One of the great powers of Azure DevOps is extensibility: there are tons of custom tasks to perform lots of different things, so take time to look in the Marketplace if you are not able to find the task you need among the basic ones.

Let’s start creating a simple release that uses the previous build as an artifact and contains two simple stages: dev and production.

Figure 8: Simple release with two stages to deploy the web application.

Each of the two stages has a simple two-task job to deploy the application, and they are based on the assumption that each environment is already configured (IIS installed, site configured, etc.), so, to deploy our ASP.NET app, we can simply overwrite the old installation folder with the new binaries.

The Replace Tokens task comes in handy in this situation: you simply need to add it as the first task of the job (before the task that copies files into the IIS directory), then configure the prefix and suffix with the two underscores to match the criteria used to tokenize the configuration in web.config.

Figure 9: Configure replace token suffix and prefix to perform substitution.

In this example only web.config should be changed, but the task can perform substitution on multiple files.

Figure 10: Substitution configuration points to the web.config file.

The beautiful aspect of the Replace Tokens task is that it uses all the variables of the release to perform substitution. For each variable it replaces the corresponding token using prefix and suffix; this is the reason for my transformation file in the build: my web.config has the __Key1__ and __Key2__ tokens inside the configuration, so I can simply configure those two variables differently for the two stages and my release is finished.
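
In YAML the Replace Tokens task (from the qetza Marketplace extension) would be configured roughly like this; the root directory is an assumption.

```yaml
- task: replacetokens@3
  displayName: 'Replace __Key1__ / __Key2__ tokens in web.config'
  inputs:
    rootDirectory: '$(System.DefaultWorkingDirectory)/WebSite'   # assumption
    targetFiles: '**/web.config'
    tokenPrefix: '__'
    tokenSuffix: '__'
```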

If you use the grid visualization, it is immediately clear how each stage is configured.

Figure 11: Configure variables for each stage, the replace task will do the rest.

Everything is done; just trigger a release and verify that the web.config of the two stages is changed accordingly.

Figure 12: Sites deployed in two stages with different settings, everything worked as expected.

Everything worked well: I was able to build once with web.config tokenization, then release the same artifacts to different stages, with different configurations managed by the release definition.

Happy AzDo

Mounting network share in Release Definition

Using Deployment Groups with Release Management in VSTS is really nice, because you can use a pull release model, where the agent runs on the machines that are the deployment target and all scripts are executed locally (instead of using PowerShell Remoting and WinRM).

A typical release definition depends on artifacts produced by a build, and with VSTS it is sometimes convenient to store build artifacts in a network share instead of on VSTS. This is especially true if, like me, you have an internet connection with really slow upload bandwidth (256 Kbps). Storing artifacts in a network share reduces the time needed by the build to upload artifacts, and by the release to download them, to a few seconds.

Storing build artifacts in a network share is really useful in situations where internet bandwidth is limited.

In this scenario, if the machines that belong to Deployment Groups are outside your domain, you have authentication problems when the release process tries to access the network share to download the artifacts. Here is the error I got when triggering a release.

Downloading artifacts failed: Microsoft.VisualStudio.Services.Agent.Worker.Release.Artifacts.ArtifactDownloadException: 
The artifact directory does not exist: \\neuromancer\Drops\VSO\Jarvis - CI - Package For UAT Test\JarvisPackage debug - 2.1.0-sprint7-team.2078.
 It can happen if the password of the account JVSTSINT\Administrator is changed recently and is not updated for the agent. 

This error is clear: the user that runs the agent in the Deployment Group is not part of the domain, thus it cannot access a network share that is part of the domain.

Storing artifacts in a network share is useful to reduce bandwidth, but you need to be sure that all agents have access to it.

I want to solve this problem without joining the machine to the domain or configuring the agent on the machine in some special way; my goal is to resolve this problem inside the release definition.

To solve this problem you can simply use the net use command line tool, which maps a network share with specific credentials. However, the download artifacts phase of the release takes place before any task, so the release will fail before any of your tasks has the opportunity to run.

Figure 1: Task used to map the network share.

A quick solution to this problem is inserting a dedicated Deployment Group phase (Figure 1) before any other phase: call this phase "mount network share" (1), add a simple Command Line task (2) and finally be sure to select the "Skip download of artifacts" option (3). Point 3 is the most important one, because downloading artifacts takes place before the execution of any task.

Then I declare a couple of release variables to store the username and password of a user that has access to that share (in my domain I have a dedicated TfsBuild account).

Figure 2: Variables to mount network share with a valid domain user

Now I only need to configure the Command Line task to use the net use command to mount the network share with the user specified in release variables. The configuration is straightforward and is represented in Figure 3.

Figure 3: Configuration of Command Line task to use net use command
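
The command itself is roughly the following, expressed here as a YAML script step for brevity; the variable names are assumptions matching the variables declared in Figure 2.

```yaml
steps:
# Map the share with explicit domain credentials; variable names
# ($(ShareUser) / $(SharePassword)) are assumptions from Figure 2.
- script: net use \\neuromancer\Drops $(SharePassword) /user:$(ShareUser)
  displayName: 'Mount network share'
```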

Thanks to the net use command, the release is able to mount the network share on each machine of the Deployment Group using the TfsBuild user. You can verify from the release logs that the Command Line task ran correctly and mapped the network share.

Figure 4: Net use command in action in the release logs.

Using a special Deployment Group phase with “Skip download of artifacts” selected allows you to run any task you need before the download of the artifacts takes place.

Gian Maria.

Running UAT tests in a VSTS / TFS release

I’ve blogged on how to run UAT and integration tests during a VSTS build; that solution works quite well but it is probably not the right way to proceed. Generally speaking that build does its work, but I have two main concerns.

1) Executing tests with remote execution requires the installation of a test agent and involves WinRM, a beast that is not so easy to tame outside a domain.

2) I’m deploying the new version of the application with an XCopy deployment, which is different from a real deploy to production.

The second point is the one that bothers me, because we already deploy to production with PowerShell scripts and I’d like to use the very same scripts to deploy to the machine used for UAT testing. Using the same scripts used for the real release puts those scripts under testing as well.

If you want to run UAT and integration tests, the best scenario is when you install the new version of the application with the very same script you use to deploy to production.

If you have a script (or whatever else) to automatically release a new version of your application, it is really better to use a different strategy to run UAT tests in VSTS / TFS: instead of using a build you should use release management. If you still do not have scripts to automatically release your application, but you have UAT tests to run automatically, it is time to allocate time to automate your deployment. This is a needed prerequisite to automate the running of UAT, and it will simplify your life.

The first step is a build that prepares the package with all the files needed by the installation; in my situation I have a couple of .7z files: the first contains all the binaries, the other contains all the updated configurations. These are the two files that I use for deployment with a PowerShell script. The script is quite simple: it stops services, backs up the current version, deletes everything, replaces the binaries with the latest version, then updates the configuration with the new default values, if any. It is not rocket science; it is a simple script that automates everything we have on our release checklist.
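
Just to fix ideas, here is a minimal, hypothetical sketch of such a script, wrapped as a pipeline PowerShell step; the service name, paths and archive names are all assumptions, and 7z is assumed to be on the PATH.

```yaml
steps:
- powershell: |
    # Stop the service that hosts the application (name is an assumption)
    Stop-Service -Name 'JarvisService' -ErrorAction SilentlyContinue

    # Back up the current version before touching anything
    $stamp = Get-Date -Format yyyyMMddHHmmss
    New-Item -ItemType Directory -Force -Path 'C:\Backups' | Out-Null
    Copy-Item -Path 'C:\Apps\Jarvis' -Destination "C:\Backups\Jarvis-$stamp" -Recurse

    # Delete everything, then restore binaries and configuration from
    # the two .7z archives produced by the build
    Remove-Item 'C:\Apps\Jarvis\*' -Recurse -Force
    & 7z x "$(System.DefaultWorkingDirectory)\binaries.7z" "-oC:\Apps\Jarvis" -y
    & 7z x "$(System.DefaultWorkingDirectory)\configuration.7z" "-oC:\Apps\Jarvis" -y

    Start-Service -Name 'JarvisService'
  displayName: 'Install new version (hypothetical sketch)'
```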

Once you have the prerequisites (a build creating binaries and installation scripts), running UAT tests in a release is really simple: a dependency on the build artifacts, a single environment, and the game is done.

Figure 1: General schema for the release that will run UAT tests.

I depend on the artifacts of a single build, specially crafted for UAT. To run UAT testing I need the .7z files with the new release of the software, but I also need a .7z file with all the UAT tests (NUnit dll files and test adapter) needed to run the tests, plus all the installation scripts.

To simplify everything I cloned the original build that I used to create the package for a new release, and I added a couple of tasks to package the UAT test files.

Figure 2: Package section of the build

I’ve blogged a lot in the past about my love for PowerShell scripts to create the packages used for release. This technique is really simple: you can test the scripts outside of build management, it is super easy to integrate into whatever build engine you are using, and with PowerShell you can do almost everything. In my source code I have two distinct PowerShell package scripts: the first creates the package with the new binaries, the second one creates a package with all the UAT assemblies as well as the NUnit test adapters. All the installation scripts are simply included in the artifacts directly from source code.

The build for UAT produces three distinct artifacts: a compressed archive with the new version to release, a compressed archive with everything needed to run the UAT tests, and an uncompressed folder with all the installation scripts.

When the build is stable, the next step is configuring a Deployment Group to run UAT. The concept of a Deployment Group is new in VSTS: it allows you to specify a set of machines that will be used in a release definition. Once you create a new Deployment Group, you can simply go to its details page to copy a script that you can run on any machine to join it to that group.

Figure 3: Script to join a machine to a Deployment Group

As you can see from Figure 3, you can join Windows, Ubuntu or RedHat machines to the group. Once you run the script, that machine will be listed as part of the group, as you can see in Figure 4.

Figure 4: Deployment groups to run UAT tests.

The concept of a Deployment Group is really important, because it allows for pull deployment instead of push deployment. Instead of having an agent that remotely configures machines, the machines of the Deployment Group download the build artifacts and run the release tasks locally. This deployment method completely removes all WinRM issues, because the release scripts are executed locally.

When designing a release, a pull model allows you to run installation scripts locally, and this leads to a more stable release process.

There are other advantages of Deployment Groups, like executing in parallel on all machines of a group. This MSDN post is a good starting point to learn about all the goodness of Deployment Groups.

Once the Deployment Group is working, creating a release is really simple if you have already created the PowerShell scripts for deployment. The whole release definition is represented in Figure 5.

Figure 5: Release definition to run UAT testing

First of all I run the installer script (it is an artifact of the build, so it is downloaded locally), then I uncompress the archive that contains the UAT tests and delete the app_offline.htm file that was generated by the script to bring the IIS website offline during the installation.

Then I need to modify a special .application file that is used to point to a specific configuration set on the UAT machine. That step is needed because the same machine is used to run UAT tests during a release or during a build (with the technique discussed in the previous post), so I need to run the UAT tests with two different sets of parameters.

Then I run another PowerShell script that changes the Web.config of the application to use Forms Authentication instead of Integrated Authentication (we use fake users during UAT). After these steps everything is ready to run the UAT tests, and now I can run them using the standard Visual Studio Test task, because the release tasks run locally on the machines belonging to the Deployment Group.

Most of the steps are peculiar to this specific application; if your application is simpler, like a plain IIS application, the release will probably be even simpler. In my situation I need to install several Windows services, update an IIS application and another Angular application, etc.

If you configure the release to start automatically as soon as new artifacts are present, you can simply trigger the build and everything will run automatically for you. Just queue the build and you will end up with a nice release that contains the results of your UAT tests.

Figure 6: Test result summary in release detail.

This technique is superior to running UAT tests during a standard build; first of all you do not need to deal with WinRM, but the real advantage is continuously testing your installation scripts. If for some reason a release script does not work anymore, you will end up with a failing release, or all UAT tests will fail because the application was not installed correctly.

The other big advantage is having the tests run locally with the standard Visual Studio Test runner, instead of dealing with remote test execution, which is slow and more error prone.

The final great advantage of this approach is that you gain confidence in your installation scripts, because they run constantly against your code, instead of being run only when you are actually releasing a new version.

As a final note, Deployment Groups are a feature that, at the time I’m writing this post, is available only in VSTS and not in TFS.

Gian Maria.