Hosted Agents plus Docker: a perfect match for Azure DevOps and Open Source projects

If you want to build an Open Source project with Azure DevOps, you can open a free account and get 10 concurrent pipelines with free agents to build your project, yes, completely free. The only problem in this scenario is that you sometimes need prerequisites installed on the build machine, like MongoDb, and they are missing on hosted builds.

Let's take NStore as a use case: an open source library for Event Sourcing in C# that needs to run unit tests against MongoDb and SQL Server, prerequisites that are not present on Linux Hosted Agents. Before giving up on Hosted Agents and starting to deploy private agents, you should know that Docker is up and running on Hosted Agents and it can be used to provide your missing prerequisites.

Thanks to Docker, you can easily provide the prerequisites your build needs to run in a Hosted Environment.

Having Docker preinstalled on the Hosted Build Agent, combined with the Docker Task, gives you tremendous power. If I want to run a build of NStore on a Linux Hosted Agent, here is a possible build that runs perfectly fine.

image

Figure 1: Simple build definition that starts a MongoDb and SQL Server instance with Docker before actually running the build.

If you examine the very first task, it is amazing how simple it is to start a SQL Server instance running on your Linux box. At the end of the Task execution you have a fully functional container running on the Hosted Agent.

image

Figure 2: Running SQL Server as a container in Linux

You just need to remember to redirect the port (-p 1433:1433) so that you can access the SQL Server instance, and you are done.
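
For reference, here is a minimal sketch of that step expressed as a YAML script step; the image name, password and container name are assumptions, not the exact values used in the build of Figure 1:

steps:
  # Start SQL Server detached; -p 1433:1433 redirects the standard port
  # so that tests running on the agent can reach the instance.
  - script: >
      docker run -d --name mssql
      -e ACCEPT_EULA=Y
      -e "SA_PASSWORD=Sql_Pw_123456"
      -p 1433:1433
      microsoft/mssql-server-linux
    displayName: Run SQL Server container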

Task number 2 uses the very same technique to run a MongoDB instance inside another Docker container, then Task 3 is a simple docker ps command, just to verify that the two containers are running correctly. As you can see from Figure 3, it is quite useful to know whether the containers really started correctly.
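
A YAML sketch of these two tasks could look like this (the container name and port mapping are assumptions):

steps:
  # Start MongoDB in another detached container, redirecting its default port.
  - script: docker run -d --name mongo -p 27017:27017 mongo
    displayName: Run MongoDB container

  # Dump all running containers to verify that both really started.
  - script: docker ps
    displayName: Docker ps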

image

Figure 3: The ps command gives a simple dump of all containers running on the machine

You can also log every container's output: in Task number 4 I'm running a logs command for the SQL Server container, just to verify, in case the SQL Server tests are all failing, why the container did not start (e.g. you forgot to accept the EULA, or you chose a password that is not complex enough).
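
As a sketch, assuming the SQL Server container was named mssql as in the snippet above, the corresponding step is just:

steps:
  # Print the container output; if all SQL Server tests fail, this tells
  # you whether the container itself failed to start and why.
  - script: docker logs mssql
    displayName: Dump SQL Server container logs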

image

Figure 4: Logging the output of containers to troubleshoot them.

Remember that if a container does not start correctly your build will have tons of failing tests, so you really need a way to quickly understand whether the tests are really failing or your container instance simply did not start (and the reason why it failed).

All subsequent tasks are standard for a .NET Core project: dotnet restore, build and test your solution, then upload test results into the build result, so you can have a nice report of all of your tests.
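
In YAML these steps could be sketched as follows; the solution file name and the trx logger are assumptions about how NStore is actually built:

steps:
  - script: dotnet restore NStore.sln
    displayName: dotnet restore

  - script: dotnet build NStore.sln --configuration Release --no-restore
    displayName: dotnet build

  # --logger trx produces result files that can be uploaded to the build.
  - script: dotnet test NStore.sln --configuration Release --no-build --logger trx
    displayName: dotnet test

  # Publish the .trx files so tests show up in the build result page.
  - task: PublishTestResults@2
    inputs:
      testResultsFormat: VSTest
      testResultsFiles: '**/*.trx'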

It is almost impossible to expect someone to give you a build agent with everything you could possibly need, but if you give the user Docker support, life is really easier.

Finally, to make everything flexible, you should grab connection strings for tests from environment variables. NStore uses a couple of environment variables called NSTORE_MONGODB and NSTORE_MSSQL to specify the connection strings used for tests. I really want you to remember that all Variables of a build are copied to environment variables during the build.
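
In a YAML pipeline the same configuration could be sketched like this; the connection string values are assumptions for the two local containers started above:

variables:
  # Every build variable is copied to an environment variable, so the
  # tests can read these two directly from the environment.
  NSTORE_MONGODB: 'mongodb://localhost:27017/nstore-tests'
  NSTORE_MSSQL: 'Server=localhost;User Id=sa;Password=Sql_Pw_123456'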

image

Figure 5: Test connection strings are stored directly in Build Variables.

As you can see from Figure 5, I used MongoDb without a password (this is an instance in Docker that will be destroyed after the build, so it is acceptable to run without a password), but you can usually configure Docker instances with start parameters. In this example I gave SQL Server a strong password (it is required for the container to start).

Remember: if you have an open source project, you can build for free with Azure DevOps pipelines with minimum effort, and before giving up on Hosted Agents, just verify whether you can get what is missing with Docker.

Gian Maria.

Azure DevOps and SecDevOps

One of the cool aspects of Azure DevOps is its extensibility through the Marketplace, and for security you can find a nice Marketplace extension called Owasp ZAP (https://marketplace.visualstudio.com/items?itemName=kasunkodagoda.owasp-zap-scan) that can be used to automate OWASP tests for web applications.

You can also check this nice article on the Microsoft developer blog, https://devblogs.microsoft.com/premier-developer/azure-devops-pipelines-leveraging-owasp-zap-in-the-release-pipeline/, which explains how you can leverage OWASP ZAP analysis during a deploy with a release pipeline.

Really good stuff to read and use.

Another gem of Azure DevOps: multi stage pipelines

With the deployment of Sprint 151 we have exciting news for Azure DevOps called multi stage pipelines. If you read my blog you already know that I'm a huge fan of YAML build definitions but, until now, for the release part you still had to use the standard graphical editor. Thanks to multi stage pipelines you can now have both build and release definitions in a single YAML file.

Multi stage pipelines will be the unified way to create a YAML file that contains both the build and the release definition for your projects.

This functionality is still in preview and you can find a good starting point here; basically some key features are still missing, but you can read in a previous post about what's next for them, and this should reassure you that this is an area where Microsoft is investing a lot.

Let's start creating the first real pipeline to deploy an ASP.NET application based on IIS. First of all I start from an existing YAML build: I just create another YAML file, then copy all the YAML of the existing build, but at the head of the file I use a slightly different syntax.

image

Figure 1: New Multistage pipeline definition

As you can see, the pipeline starts with the stages keyword, then a stage section starts, which basically contains a standard build; in fact I have one single job in the Build stage, a job called Build_and_package that takes care of building, testing and finally publishing the artifacts.
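
Here is a minimal skeleton of that syntax; the stage and job names mirror the ones described above, while everything else (pool image, steps) is just an assumption to keep the sketch short:

stages:
  - stage: Build
    jobs:
      - job: Build_and_package
        pool:
          vmImage: ubuntu-16.04
        steps:
          # build, test and publish artifacts tasks go here
          - script: echo building, testing and publishing artifacts

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: Deploy_fake
        steps:
          - script: echo fake deploy, does nothing yet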

After the pipeline is launched, here is the result (Figure 2):

image

Figure 2: Result of a multistage pipeline

As you can see, the result is really different from a normal pipeline: first of all I can see all the stages (actually my deploy job is fake and does nothing). The pipeline is now composed of stages, where each stage contains jobs, and each job is a series of tasks. Clicking on the Jobs section you can see the outcome of each job, which allows me to have a quick look at what really happened.

image

Figure 3: Job results as a full list of all jobs for each stage.

When it is time to deploy, we target environments; unfortunately in this early preview we can only add a Kubernetes namespace to an environment, but we expect to soon be able to add Virtual Machines through deployment groups and, clearly, Azure Web Apps and other Azure resources.
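
For reference, this is a sketch of how a deployment job can target an environment; the environment name is an assumption and the syntax may still change while the feature is in preview:

  - stage: Deploy
    dependsOn: Build
    jobs:
      # A deployment job records its runs against the target environment.
      - deployment: DeployWeb
        pool:
          vmImage: ubuntu-16.04
        environment: test-environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo deploying to the environment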

I strongly encourage you to start familiarizing yourself with the new syntax, so you will be able to take advantage of this new feature as soon as it is ready.

Gian Maria

Converting an existing pipeline to YAML: how to avoid double builds

YAML is now the preferred way to create an Azure DevOps Build Pipeline, and converting an existing build is really simple thanks to the "View YAML" button that can convert every existing pipeline into a YAML definition.

image

Figure 1: Converting an existing pipeline to YAML is easy with the View YAML button present in the editor page.

The usual process is: start a new feature branch to test the pipeline conversion to YAML, create the YAML file and a pipeline based on it, then start testing. Now a problem arises: until the YAML definition is merged into every branch of your Git repository, you should keep the old UI-based build and the new YAML build together.

What happens if a customer calls you because they have a bug in an old version? You create a support branch and then realize that the YAML build definition is not present in that branch. What if the actual YAML script is not valid for that code? The obvious solution is to keep the old build around until you are 100% sure that it is not needed anymore.

During the conversion from a legacy build to YAML it is wise to keep the old build around for a while.

This usually means that you gradually remove triggers for branches until the YAML file is merged all the way to master or the last branch, then you leave the old definition around without triggers for a little while, and finally you delete it.
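
Remember that the YAML build carries its trigger section directly in the file, so while you remove branches from the trigger of the old UI-based definition, on the YAML side you simply declare them; the branch names below are placeholders:

trigger:
  branches:
    include:
      - master
      - develop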

The real problem is that there is usually a transition phase where you want the old pipeline definition to run in parallel with the YAML one, but this triggers both builds at each push.

image

Figure 2: After a push, both builds, the old UI-based one and the new YAML-based one, were triggered.

From Figure 2 you can understand the problem: each time I push, two builds are spinned up. Clearly you can start tweaking triggers for the builds to handle this situation, but it is usually tedious.

The ideal situation would be to trigger the right build based on whether the YAML definition file is present or not.

A viable solution is: abort the standard build if the corresponding YAML build file is present in the sources. This will work perfectly until the YAML build file reaches the last active branch; after that moment you can disable the trigger on the original task-based build or delete it completely, because all the relevant branches now have the new YAML definition.

To accomplish this, you can simply add a PowerShell Task to the original build, with a script that checks if the YAML file exists and, if the test is positive, aborts the current build. Luckily enough I found a script ready to use: many thanks to the original author. You can find the original script on GitHub and simply take the relevant part, putting it inside a standard PowerShell task.

image

Figure 3: PowerShell inline task to simply abort the build.

The script works if you have a variable called Token where you place a Personal Access Token with sufficient permissions to cancel the build, as explained in the original project on GitHub.

Here is my version of the script:

# Abort this build if the corresponding YAML build definition is in the sources
if (Test-Path Assets/Builds/BaseCi.yaml)
{
    Write-Host "Abort the build because corresponding YAML build file is present"

    # REST endpoint of the currently running build
    $url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/builds/$($env:BUILD_BUILDID)?api-version=2.0"

    # $(token) is the build variable containing the Personal Access Token
    $pat = "$(token)"

    # Build the Basic authentication header from the PAT
    $pair = ":${pat}"
    $b = [System.Text.Encoding]::UTF8.GetBytes($pair)
    $token = [System.Convert]::ToBase64String($b)

    # PATCH the build status to Cancelling to abort the current run
    $body = @{ 'status' = 'Cancelling' } | ConvertTo-Json
    $pipeline = Invoke-RestMethod -Uri $url -Method Patch -Body $body -Headers @{
        'Authorization' = "Basic $token";
        'Content-Type' = "application/json"
    }
    Write-Host "Pipeline = $($pipeline)"
}
else
{
    Write-Host "YAML build is not present, we can continue"
}

This is everything you need. After this script is added to the original build definition, you can queue a build for a branch that contains the YAML build definition and watch the execution being automatically cancelled, as you can see in Figure 4:

image

Figure 4: Build cancelled because the YAML definition file is present.

With this workaround we still have double builds triggered but, at least when the branch contains the YAML file, the original build definition will immediately cancel itself after checkout, because it knows that a corresponding YAML build was triggered. If the YAML file is not present, the build runs just fine.

This is especially useful because it avoids human error. Say a developer manually triggers the old build to create a release or to verify something: if they trigger the old build on a branch that has the new YAML definition, the build will be automatically aborted, so the developer can trigger the right definition.

Gian Maria.

Error publishing .NET Core app in Azure DevOps YAML build

Short story: I created a simple YAML build for a .NET Core project where one of the tasks publishes a simple .NET Core console application. After running the build I got a strange error in the output:

No web project was found in the repository. Web projects are identified by presence of either a web.config file or wwwroot folder in the directory.

This is extremely strange, because the project is not a web project; it is a standard console application written for .NET Core 2.2, so I did not understand why it was searching for a web.config file.

Then I decided to create a standard non-YAML build, and when I dropped the task on the build I immediately understood the problem. This happens because the .NET Core task with the publish command assumes by default that a web application is being published.

image

Figure 1: The default value for the dotnet publish command is to publish web projects

Since I have no web project to publish, I immediately changed my YAML definition to explicitly set the publishWebProjects property to false.

  - task: DotNetCoreCLI@2
    displayName: .NET Core Publish
    inputs:
      command: publish
      projects: '$(serviceProject)'
      arguments: '--output $(Build.ArtifactStagingDirectory)'
      configuration: $(BuildConfiguration)
      workingDirectory: $(serviceProjectDir)
      publishWebProjects: False
      zipAfterPublish: true

And the build was fixed.

Gian Maria.