Azure DevOps gems, YAML Pipeline and Templates

If you read my blog you already know that I'm a great fan of YAML pipelines instead of the graphical editor in the Web UI. There are lots of reasons to prefer YAML; chief among them is the ability to branch the pipeline definition together with the code. But there is another really important feature: templates.

There is really detailed documentation on MSDN on how to use this feature, but I want to give you a complete walkthrough on how to start using templates effectively. Thanks to templates you can capture a standard build definition, with steps or jobs and steps, in a template file, then reference that file from the actual build, just adding parameters.

The ability to capture a sequence of steps in a common template file and reuse it over and over again in real pipelines is probably one of the top reasons for moving to YAML templates.

One of the most common scenarios for me is: an account with lots of utility projects (multi-targeted for full framework and .NET Standard), each one with its own git repository and the need for a standard CI definition to:

1) Build the solution
2) Run tests
3) Pack a NuGet package with semantic versioning
4) Publish the NuGet package to an Azure DevOps private package feed

If you work on big projects you usually have lots of these small projects: utilities for Castle, Serilog, security, and so on. In this scenario it is really annoying to define a pipeline for each project with the graphical editor, so moving to YAML is pretty natural. You can start from a standard file, copy it into the repository and then adapt it for the specific project, but when a task is updated you need to touch every project to update all the references. The main problem with this approach is that, after some time, the builds are no longer in sync and each project starts to behave differently.

I define my template once, in a dedicated repository, then I can reuse it in any project. When the template changes, I want to be able to manually update each pipeline to reference the new version or, even better, decide which projects will be updated automatically.

Let's start with the actual build file, the one included in the project repository, and check how to reference a template stored in another repository. The only limit is that the repository must be in the same organization or on GitHub. Here is the full content of the file.

trigger:
- master
- develop
- release/*
- hotfix/*
- feature/*

resources:
  repositories:
    - repository: templatesRepository
      type: git
      name: Jarvis/BuildScripts
      ref: refs/heads/hotfix/0.1.1

jobs:

- template: 'NetStandardTestAndNuget.yaml@templatesRepository'
  
  parameters:
    buildName: 'JarvisAuthCi'
    solution: 'src/Jarvis.Auth.sln'
    nugetProject: 'src/Jarvis.Auth.Client/Jarvis.Auth.Client.csproj'
    nugetProjectDir: 'src/Jarvis.Auth.Client'

The file is really simple: it starts with the triggers (as in a standard YAML build), then comes a resources section, which allows you to reference objects that live outside the pipeline. In this specific example I'm declaring that this pipeline uses a resource called templatesRepository, an external git repository (in the same organization) called BuildScripts and contained in the Team Project called Jarvis; finally, the ref property allows me to choose the branch or tag to use, with standard git refs syntax (refs/heads/master, refs/heads/develop, refs/tags/xxxx, etc.). In this specific example I'm pinning the build scripts to the hotfix/0.1.1 branch; I could equally pin an immutable tag, so that even if the BuildScripts repository is upgraded this build keeps referencing the same version, and I would need to manually update this build to reference a newer one. If I want this definition to automatically use new versions I can simply reference the master or develop branch.

The real advantage of having the template versioned in another repository is that it can use GitFlow: every pipeline that uses the template can choose a specific version, the latest stable, or even the latest development version.
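As a sketch, here is how the same resource declaration could pin different versions; the tag and branch names shown here are illustrative:

```yaml
resources:
  repositories:
    - repository: templatesRepository
      type: git
      name: Jarvis/BuildScripts
      # Pin an immutable tag: the pipeline keeps this exact template version
      # until you manually update the reference.
      ref: refs/tags/0.1.0
      # Or track a branch to pick up new template versions automatically:
      # ref: refs/heads/master    # latest stable
      # ref: refs/heads/develop   # latest development
```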

Finally I start defining jobs, but instead of defining them inside this YAML file I declare that this pipeline will use a template called NetStandardTestAndNuget.yaml contained in the templatesRepository resource. After the template reference I specify all the parameters needed by the template to run. In this specific example I have four parameters:

buildName: the name of the build; I use a custom task based on GitVersion that renames each build using this parameter followed by the semantic version.
solution: path of the solution file to build.
nugetProject: path of the .csproj file that contains the package to be published.
nugetProjectDir: directory of the .csproj file to publish.

The last parameter could be derived from the third, but I want to keep the YAML simple, so I require the user of the template to explicitly pass the directory of the project, which is used as the workingDirectory parameter for the dotnet pack command.
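The template's pack step might consume these two parameters roughly like this; the author's actual template is not shown, so the exact inputs here are an assumption:

```yaml
- task: DotNetCoreCLI@2
  displayName: 'dotnet pack'
  inputs:
    command: pack
    packagesToPack: ${{parameters.nugetProject}}
    # nugetProjectDir is passed verbatim as the working directory
    workingDirectory: ${{parameters.nugetProjectDir}}
```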

Now the real fun starts: let's examine the template file contained in the other repository. Usually a template file starts with a parameters section where it declares the parameters it expects.

parameters:
  buildName: 'Specify name'
  solution: ''
  buildPlatform: 'ANY CPU'
  buildConfiguration: 'Release'
  nugetProject: ''
  nugetProjectDir: ''
  dotNetCoreVersion: '2.2.301'

As you can see the syntax is really simple: just specify the name of each parameter followed by its default value. In this example only the four parameters described in the previous part really need to be passed; the others have sensible defaults.

After the parameters section a template file can specify steps or even entire jobs; in this example I want to define two distinct jobs, one to build and run tests, the other for NuGet packaging and publishing.

jobs:

- job: 'Build_and_Test'
  pool:
    name: Default

  steps:
  - task: DotNetCoreInstaller@0
    displayName: 'Use .NET Core sdk ${{parameters.dotNetCoreVersion}}'
    inputs:
      version: ${{parameters.dotNetCoreVersion}}

As you can see I'm simply writing a standard jobs section, starting with the job Build_and_Test that will run on the Default pool. The job starts with a DotNetCoreInstaller step, where you can see that to reference a parameter you need the special syntax ${{parameters.parameterName}}. The beautiful aspect of templates is that they look exactly like a standard pipeline definition; just use the ${{}} syntax to reference parameters.

The Build_and_Test job proceeds with standard build and test tasks and determines (with GitVersion) the semantic version for the package. Since this value will be used in other jobs, I need to make it available with a specific PowerShell task.

  - powershell: echo "##vso[task.setvariable variable=NugetVersion;isOutput=true]$(NugetVersion)"
    name: 'SetNugetVersion'

This task simply re-emits the variable $(NugetVersion) as NugetVersion, but with isOutput=true to make it available to other jobs in the pipeline. Now I can define the other job of the template, which packs and publishes the NuGet package.

- job: 'Pack_Nuget'
  dependsOn: 'Build_and_Test'

  pool:
    name: Default

  variables:
    NugetVersion: $[ dependencies.Build_and_Test.outputs['SetNugetVersion.NugetVersion'] ]

The only difference from the previous job is the declaration of the variable NugetVersion, with a special syntax that allows referencing an output variable of a previous job. Now I simply trigger the build from the original project and everything runs just fine.
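A quick way to verify that the mapping works is a diagnostic step inside the Pack_Nuget job; this step is hypothetical, added here only for illustration:

```yaml
  steps:
  - powershell: Write-Host "Packing NuGet package version $(NugetVersion)"
    displayName: 'Show resolved package version'
```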


Figure 1: Standard build for library project, where I use the whole definition in a template file.

As you can see, thanks to templates, the actual pipeline definition for my project is 23 lines long and I can simply copy and paste it to every utility repository, change 4 lines of code (the template parameters) and everything runs just fine.

Using templates lowers the barrier for continuous integration: every member of the team can start a new utility project and set up a standard pipeline even without being an expert.

Using templates brings lots of advantages to the team, on top of the standard advantages of using plain YAML syntax.

First: you can create standards for all pipeline definitions. Instead of having a different pipeline structure for each project, templates allow you to define a set of standard pipelines and reuse them across multiple projects.

Second: you get automatic updates. Thanks to the ability to reference templates from another repository, it is possible to just update the template and have all the pipelines that reference it automatically use the new version (by referencing a branch). You keep the ability to pin a specific version if needed (by referencing a tag or a specific commit).

Third: you lower the barrier for creating pipelines for all team members who do not have a good knowledge of Azure Pipelines; they can simply copy the build, change the parameters, and they are ready to go.

If you still have pipelines defined with the graphical editor, it is time to start upgrading to the YAML syntax right now.

Happy Azure DevOps.

Install latest node version in Azure Pipelines

I have a build in Azure DevOps that suddenly started failing on some agents while building an Angular application. Looking at the log I found this error:

You are running version v8.9.4 of Node.js, which is not supported by Angular CLI 8.0+.

Ok, the error is really clear: some developer upgraded the Angular version in the project and the Node.js version installed on some of the build servers is too old. The obvious fix is logging into ALL build servers and upgrading the Node.js installation so the build runs on every agent. This is not a perfect solution, especially because someone can add another build server with an outdated Node.js version and I'm stuck again.

Having strong prerequisites on build agents, like a specific version of Node.js, is annoying and needs to be addressed as soon as possible.

In this scenario you have two distinct ways to solve the problem. The first solution is adding a custom capability to all the agents that have at least Node.js 10.9 or, even better, marking all agents capable of building Angular 8 with an Angular8 capability. Now you can mark all builds for projects using Angular 8 to demand the Angular8 capability, and let Azure DevOps choose the right agent for you.
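In a YAML definition, demanding the custom capability can be sketched like this, assuming the Angular8 capability has been added manually to the suitable agents:

```yaml
pool:
  name: Default
  demands:
  - Angular8   # custom capability set by hand on agents with a recent Node.js
```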

Matching agent capabilities with build demands is a standard way to have your build definition matched to an agent that has all prerequisites preinstalled.

But even better is using a task (still in preview) that is able to automatically install and configure the required Node.js version on the agent.


Figure 1: Use Node.js ecosystem Task configured

Thanks to this task you do not need to worry about choosing the right agent, because the task automatically installs the required Node.js version for you. This solution is perfect because it lessens the prerequisites of the build agent: it is the build itself that installs any missing prerequisite.
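For reference, a minimal YAML usage of the Node.js Tool Installer task; the version spec here is illustrative:

```yaml
steps:
- task: NodeTool@0
  displayName: 'Install Node.js 10.x'
  inputs:
    versionSpec: '10.x'   # the agent downloads and uses this version
```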

When you start having lots of builds and build agents, it is better not to have too many prerequisites on a build agent, because they complicate the deployment of new agents and can create intermittent failures.

Whenever you can, you should install all the prerequisites required by the build from the build definition itself, avoiding prerequisites on agents.

Happy Azure DevOps Pipelines.

Another gem of Azure DevOps: multistage pipelines

With the deployment of Sprint 151 we have exciting news for Azure DevOps: multi stage pipelines. If you read my blog you already know that I'm a huge fan of YAML build definitions but, until now, for the release part you still had to use the standard graphical editor. Thanks to multi stage pipelines you can now have both build and release definitions in a single YAML file.

Multi stage pipelines will be the unified way to create a YAML file that contains both build and release definitions for your projects.

This functionality is still in preview and you can find a good starting point here. Basically some key features are still missing, but you can read about what's next for them in a previous post, and this should reassure you that this is an area where Microsoft is investing a lot.

Let's start creating a first real pipeline to deploy an ASP.NET application based on IIS. First of all I start from an existing YAML build: I just create another YAML file, copy in all the YAML of the existing build, but at the top of the file I use a slightly different syntax.


Figure 1: New Multistage pipeline definition

As you can see the pipeline starts with the stages keyword, then a stage section begins that basically contains a standard build; in fact I have one single job in the Build stage, a job called Build_and_package that takes care of building, testing and finally publishing artifacts.
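Since the figure only shows a screenshot, here is a minimal sketch of the stages syntax just described; the job names mirror the text and the steps are placeholders:

```yaml
stages:
- stage: Build
  jobs:
  - job: Build_and_package
    steps:
    - script: echo "build, test and publish artifacts here"

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: Deploy_fake
    steps:
    - script: echo "deploy steps will go here"
```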

After the pipeline is launched, here is the result (Figure 2):


Figure 2: Result of a multistage pipeline

As you can see the result is really different from a normal pipeline: first of all I can see all the stages (actually my deploy job is fake and does nothing). The pipeline is now composed of stages, where each stage contains jobs, and each job is a series of tasks. Clicking on the Jobs section you can see the outcome of each job, which gives me a quick look at what really happened.


Figure 3: Job results as a full list of all jobs for each stage.

When it is time to deploy, we target environments; unfortunately in this early preview we can only add a Kubernetes namespace to an environment, but we expect to soon be able to add virtual machines through deployment groups and, clearly, Azure Web Apps and other Azure resources.
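Targeting an environment uses the deployment job syntax, which can be sketched as follows; the environment name here is hypothetical:

```yaml
- stage: Deploy
  jobs:
  - deployment: DeployWeb
    environment: 'my-k8s-namespace'   # hypothetical environment name
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deployment steps here"
```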

I strongly encourage you to start familiarizing yourself with the new syntax, so you will be able to take advantage of this new feature as soon as it is ready.

Gian Maria

Converting existing pipelines to YAML: how to avoid double builds

YAML is now the preferred way to create an Azure DevOps build pipeline, and converting an existing build is really simple thanks to the "View YAML" button, which can convert every existing pipeline into a YAML definition.


Figure 1: Converting an existing pipeline to YAML is easy with the View YAML button present in the editor page.

The usual process is: start a new feature branch to test the pipeline conversion to YAML, create the YAML file and a pipeline based on it, then start testing. Now a problem arises: until the YAML definition is merged into EVERY branch of your Git repository, you need to keep the old UI-based build and the new YAML build together.

What happens if a customer calls you because of a bug in an old version? You create a support branch and then realize that in that branch the YAML build definition is not present. What if the current YAML script is not valid for that code? The obvious solution is to keep the old build around until you are 100% sure it is not needed anymore.

During conversion from legacy build to YAML it is wise to keep the old build around for a while.

This usually means that you gradually remove triggers for branches until the YAML file is merged all the way to master or the last branch, then you leave the old definition around without triggers for a little while, and finally you delete it.
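On the YAML side, the trigger section supports include/exclude branch filters, so the gradual hand-over can be mirrored there; the branch names below are illustrative:

```yaml
trigger:
  branches:
    include:
    - master
    - develop
    exclude:
    - feature/*   # branches still building with the old UI-based definition
```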

The real problem is that there is usually a transition phase where you want the old pipeline definition to run in parallel with the YAML one, but this triggers both builds at each push.


Figure 2: After a push, both builds, the old UI-based one and the new YAML-based one, were triggered.

From Figure 2 you can understand the problem: each time I push, two builds are spun up. Clearly you can start tweaking the triggers of each build to handle this situation, but it is usually tedious.

The very best solution would be to trigger the right build based on whether the YAML definition file is present or not.

A viable solution is: abort the standard build if the corresponding YAML build file is present in the source. This works perfectly until the YAML build file reaches the last active branch; after that moment you can disable the trigger on the original task-based build, or delete it completely because all the relevant branches now have the new YAML definition.

To accomplish this you can simply add a PowerShell task to the original build, with a script that checks whether the YAML file exists and, if so, aborts the current build. Luckily enough I found a script ready to use: many thanks to the original author. You can find the original script on GitHub and simply take the relevant part, putting it inside a standard PowerShell task.


Figure 3: PowerShell inline task to abort the build.

The script expects a variable called Token containing a Personal Access Token with sufficient permission to cancel the build, as explained in the original project on GitHub.

Here is my version of the script:

# Abort the current build if the corresponding YAML build definition exists.
if (Test-Path Assets/Builds/BaseCi.yaml)
{
    Write-Host "Abort the build because corresponding YAML build file is present"
    # Build the REST API url that patches the status of the current build.
    $url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/builds/$($env:BUILD_BUILDID)?api-version=2.0"
    # The Token pipeline variable holds a PAT with permission to cancel builds.
    $pat = "$(token)"
    $pair = ":${pat}"
    $b = [System.Text.Encoding]::UTF8.GetBytes($pair)
    $token = [System.Convert]::ToBase64String($b)
    $body = @{ 'status' = 'Cancelling' } | ConvertTo-Json

    $pipeline = Invoke-RestMethod -Uri $url -Method Patch -Body $body -Headers @{
        'Authorization' = "Basic $token";
        'Content-Type' = "application/json"
    }
    Write-Host "Pipeline = $($pipeline)"
}
else
{
    Write-Host "YAML build is not present, we can continue"
}

This is everything you need. After this script is added to the original build definition, you can queue a build for a branch that contains the YAML build definition and watch the execution being automatically canceled, as you can see in Figure 4:


Figure 4: Build cancelled because YAML definition file is present.

With this workaround we still have double builds triggered but, at least when the branch contains the YAML file, the original build definition immediately cancels itself after checkout, because it knows that a corresponding YAML build was triggered. If the YAML file is not present, the build runs just fine.

This is especially useful because it avoids human error. Say a developer manually triggers the old build to create a release or to verify something: if they trigger the old build on a branch that has the new YAML definition, the build is automatically aborted, so the developer can trigger the right definition.

Gian Maria.

Error publishing .NET Core app in Azure DevOps YAML build

Short story: I created a simple YAML build for a .NET Core project where one of the tasks publishes a simple .NET Core console application. After running the build I got a strange error in the output:

No web project was found in the repository. Web projects are identified by presence of either a web.config file or wwwroot folder in the directory.

This is extremely strange, because the project is not a web project; it is a standard console application written for .NET Core 2.2, so I do not understand why it is searching for a web.config file.

Then I decided to create a standard non-YAML build and, when I dropped the task on the build, I immediately understood the problem: the .NET Core task with the publish command assumes by default that a web application is being published.


Figure 1: Default value for the dotnet publish command is to publish Web Project

Since I have no web project to publish, I immediately changed my YAML definition to explicitly set the publishWebProjects property to False.

  - task: DotNetCoreCLI@2
    displayName: .NET Core Publish
    inputs:
      command: publish
      projects: '$(serviceProject)'
      arguments: '--output $(Build.ArtifactStagingDirectory)'
      configuration: $(BuildConfiguration)
      workingDirectory: $(serviceProjectDir)
      publishWebProjects: False
      zipAfterPublish: true

And the build was fixed.

Gian Maria.