Consume Azure DevOps feed in TeamCity

Azure DevOps has integrated feed management you can use for NuGet, npm, etc.; feeds are private and only authorized users can download / upload packages. Today I had a little problem setting up a build in TeamCity that consumes a feed hosted in Azure DevOps, because it failed with a 401 (Unauthorized) error.

The problem with Azure DevOps NuGet feeds is how to authenticate other toolchains or build servers.

The project still has some old builds in TeamCity, and as soon as it started consuming packages published in Azure DevOps, those TeamCity builds began failing with a 401 (Unauthorized) error. The question is: how can I consume an Azure DevOps NuGet feed from agents or tools that are not part of Azure DevOps itself?

I must admit that this information is scattered across various resources, and Azure DevOps simply tells you to add a nuget.config to your project and you are ready to go; but this is true only if you are using Visual Studio connected to the account.


Figure 1: Standard configuration for nuget config to point to the new feed

The basic documentation forgets to mention authentication, except for the Get The Tool instructions, where you are advised to download the Credential Provider if you do not have Visual Studio.


Figure 2: Instruction to get NuGet and Credential Provider

The Credential Provider is useful, but it is not really the solution, because I want something more NuGet friendly; the goal is: when the TeamCity agent issues a NuGet restore, it should just work.

A better alternative is to use the standard NuGet authentication mechanism, where you simply add a source with both username and password. Let's start from the basics: if I use the NuGet command line to list packages from my private source, I get prompted for credentials.

Figure 3: NuGet asking for credentials to access a feed

Now, as for everything that involves Azure DevOps, when you are asked for credentials, you can type anything for the username and provide an Access Token as the password.

Figure 4: Specifying anything for the username and my access token as the password, I can read the feed.

This is the easiest and most secure way to log in to Azure DevOps from command line tools, especially because, to access the feed, I generated a token with really reduced permissions.


Figure 5: Access token with only packaging read permission

As you can see in Figure 5, I created a token that has only read access to Packaging; if someone steals it, he/she can only read packages and nothing more.

Now the question is: how can I make the TeamCity agent use that token? TeamCity actually has a dedicated section where you can specify username and password, or where you can add external feeds, but I want a general solution that works for every external tool, not only for TeamCity. It is time to understand how NuGet configuration works.

NuGet can be configured with nuget.config files placed in some special directories, to specify configuration for the current user or for the entire computer.

The standard Microsoft documentation states that we can have three levels of nuget.config for a solution: one file located in the same directory as the solution file, another one with user scope located at %appdata%/NuGet/nuget.config, and finally settings for every operation on the entire computer in %programfiles(x86)%/NuGet/Config/nuget.config.

You also need to know, from the standard nuget.config reference documentation, that you can add credentials directly into the nuget.config file, because you can specify username and password for each package source. So I can place, on each computer with an active TeamCity agent, a computer-level nuget.config that specifies credentials for my private feed and the game is done: I just created the file and uploaded it to all TeamCity machines running agents.

The real problem with this approach is that the token is included in clear text in the config file located in C:\Program Files (x86)\NuGet\Config.

At this point you can hit this issue: https://github.com/NuGet/Home/issues/3245. If you include a clear text password in the nuget.config Password field, you will get a strange “The parameter is incorrect” error, because a clear text password should be specified with the ClearTextPassword key instead. After all, who writes a clear text password in a file?
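
Just to make the expected shape of the file concrete, here is a minimal sketch of a nuget.config with clear text credentials; organization, feed name and token are placeholders, not the real values used in this post.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Placeholder organization and feed name: use your real feed URL here -->
    <add key="proximo" value="https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <!-- The element name must match the source key defined above -->
    <proximo>
      <add key="Username" value="anything" />
      <!-- Use ClearTextPassword (not Password) when the token is stored unencrypted -->
      <add key="ClearTextPassword" value="your-personal-access-token" />
    </proximo>
  </packageSourceCredentials>
</configuration>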

This leads to the final and most secure solution: on your computer, download the latest nuget.exe into a folder, then issue this command:

 .\nuget.exe sources add -name proximo `
   -source https://pkgs.dev.azure.com/xxxxx/_packaging/yyyyyy/nuget/v3/index.json `
   -username anything `
   -password myaccesstoken

This command adds, for the current user, a feed named proximo that points to the correct source with username and password. After you have added the source, you can simply go to the %appdata%/NuGet folder and open the nuget.config file.


Figure 6: Encrypted credentials stored inside nuget.config file.

As you can see, nuget.exe stored an encrypted version of your token in your user configuration; if you issue a nuget list -Source xxxxx again, you can verify that NuGet is able to authenticate automatically without any problem, because it is using the credentials in the config file.

The real problem with this approach is that the credentials are encrypted only for the machine and the user that issued the command; you cannot reuse them on a different machine, or on the same machine with a different user.

NuGet password encryption cannot be shared between different users or different computers; it works only for the current user on the current computer.

Figure 7: Include a clear text password in the machine-level nuget.config to make every user able to access that specific feed

To conclude, you have two options: you can generate a token, include it in clear text in a nuget.config and copy that file to all of your TeamCity build servers. If you do not like clear text tokens in files, have the agent run under a specific build user, log in to the agent machine with that specific build user and issue nuget sources add to have the token encrypted for that user.

In both cases you need to renew the configuration every year (tokens last for at most one year). You can automate this process with a special build that calls nuget sources add.

Gian Maria.

Quick Peek at Microsoft Security Code Analysis: Credential Scanner

Microsoft Security Code Analysis contains a set of tasks for Azure DevOps pipelines that automate some security checks while building your software. Automatic security scanning tools are in no way a substitute for human security analysis; remember: if you develop code ignoring security, no tool can save you.

Despite this fact, there are situations where static analysis can really give you a benefit, because it can save you from simple and silly errors that can lead to trouble. Each task in the Microsoft Security Code Analysis package is designed to solve a particular problem and to prevent a common mistake.

Remember that security cannot be enforced only with automated tools; nevertheless, they are useful to avoid some common mistakes and are not meant to replace a security audit of your code.

The first task I suggest you look at is Credential Scanner, a simple task that searches source files for potential credentials.


Figure 1: Credential scanner task

Modern projects, especially those designed for the cloud, use tons of sensitive data that can be mistakenly stored in source code. The easiest mistake is storing credentials for databases or other services inside a configuration file, like web.config for ASP.NET projects, or leaving some token for a cloud resource or service in the code, leaving that resource unprotected.

Including Credential Scanner in your Azure Pipeline can save you trouble; with minimal configuration you can have it scan your source code to find credentials. All you need to do is drop the task into the pipeline, use the default configuration and you are ready to go. Full details on configuring the task can be found here.
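
This post uses the classic graphical editor, but if you prefer YAML pipelines, a minimal sketch of the same step looks roughly like this (task name, version and inputs are taken from the Microsoft Security Code Analysis extension as I remember them; verify them against the version of the extension installed in your organization):

# Credential Scanner task from the Microsoft Security Code Analysis extension.
- task: CredScan@2
  displayName: 'Run Credential Scanner'
  inputs:
    outputFormat: 'csv'   # csv output is handy because it can be opened directly in Excel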


Figure 2: Configuration pane for Credential Scanner

Credential Scanner will run in your pipeline and report the problems it found.


Figure 3: Credential scanner found a match.

As you can see in Figure 3, Credential Scanner found a match, but the task does not make the build fail (as you might expect). This is normal behavior, because all the security tasks are meant to produce an output file with the scan results, and it is the duty of another dedicated task to analyze all result files and make the build fail if problems are found.

It is normal for security related tasks not to fail the build immediately; a dedicated task is needed to analyze ALL log files and fail the build if needed.

The Post Analysis task is your friend here.

Figure 4: Add a Post Analysis task to have the build fail if some of the security related tasks failed

This special task allows you to specify which of the security tasks you want to analyze, and this is the reason why the build does not fail immediately when Credential Scanner finds a problem. The goal here is to run ALL security related tasks, then analyze all of their results and have the build fail if problems were found.


Figure 5: Choose which analyzer you want to use to make the build fail.

After you add this task at the end of the build, your build fails if security problems are found.
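
In YAML form the corresponding step could look roughly like this (again, task name and inputs as I remember them from the extension documentation, to be verified against your installed version):

# Scans the logs produced by the security tasks above and fails the build
# when one of the selected analyzers reported problems.
- task: PostAnalysis@1
  displayName: 'Security Post Analysis'
  inputs:
    CredScan: true   # in this example only Credential Scanner results are analyzed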

Figure 6: Build fails because one of the analyzers found problems. In this specific situation we have credentials in code.

As you can see from Figure 6, the Credential Scanner task is green and it is the Security Post Analysis task that makes the build fail. It also logs some information on the build errors page, as you can see from Figure 7.


Figure 7: Build fails for issues in credential scanner

Now the final question is: where can I find the CSV file generated by the tool? The answer is simple: there is another special task whose purpose is to upload all logs as artifacts of the build.


Figure 8: Simply use the PublishSecurityAnalysisLog task to have all security related logs published as artifacts.
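
A rough YAML equivalent of this step is sketched below; the artifact name is the default one as far as I remember, adjust it to your needs:

# Uploads the output of every security tool (including the Credential Scanner csv)
# as a build artifact.
- task: PublishSecurityAnalysisLogs@3
  displayName: 'Publish security analysis logs'
  inputs:
    ArtifactName: 'CodeAnalysisLogs'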

As you can see from Figure 9, all the logs are correctly uploaded as artifacts and divided by tool type. In this example I ran only the Credential Scanner tool, so it is the only output I have in my artifacts folder.

Figure 9: Credential Scanner output included as a build artifact.

After downloading the file you can open it with Excel (I usually use the CSV output format for Credential Scanner) and find what’s wrong.

Figure 10: CSV output contains the file with the error and the line number, but everything else is redacted for security

As I can verify from the CSV output, I have a problem at line 9 of the config.json file; time to look at the code and find the problem.


Figure 11: Password included in a config file.

In the CSV output file, the Credential Scanner task only stores the file, the row number and a hash of the credential found; this is needed to avoid leaking the credential through the build output.

Now, this example was made for this post, so do not try that password against me, it will just not work :). If you think that you would never fall for this silly mistake, remember that no one is perfect. Even though I try to avoid these kinds of errors, I must admit that some years ago I was contacted by a nice guy who told me that I had left a valid token in one of my samples. Shame on me, but these kinds of errors can happen. Thanks to Credential Scanner you can really mitigate them.

If you wonder what kind of rules the task uses to identify passwords, the documentation states that:

CredScan relies on a set of content searchers commonly defined in the buildsearchers.xml file. The file contains an array of XML serialized objects that represent a ContentSearcher object. The program is distributed with a set of searchers that have been well tested but it does allow you to implement your own custom searchers too.

So you can download the task and examine the DLL, but the nice aspect is that you can include your own searchers too.

If the tool reports a match and you are really sure that it is a false positive, you can use a suppression file, as described in the documentation.


Figure 12: Suppression rules for the task.

I must admit that Credential Scanner is really a powerful tool that should be included in every build, especially if you are developing open source code. Remember that there are lots of tools made to scavenge projects for this kind of vulnerability, so publishing a sensitive password or key in an open source project is a big problem: sooner or later it will bite you.

Gian Maria

Multiline PowerShell on YAML pipeline

Sometimes a few lines of PowerShell in your pipeline are all you need to quickly customize a build, without using a custom task or keeping a PowerShell file in source code. A typical situation is writing a file whose content needs to be determined by a PowerShell script; in my case I need to create a configuration file based on some build variables.

Using the standard graphical editor to configure a PowerShell task and then grabbing the YAML with the “View YAML” button is the quickest way to do this, but be warned, because you can run into the following error:

can not read a block mapping entry; a multiline key may not be an implicit key

This error happens when you put multiline text inside a YAML file with bad indentation of the multiline string. The inline PowerShell task really comes in handy, but you need to pay special attention, because the “View YAML” button in the UI sometimes generates bad YAML.

In Figure 1 you can see what happens when I copy a task with the “View YAML” button of the standard graphical editor and paste it into a YAML build: the online editor immediately shows me that the syntax is wrong. The real problem here is that Visual Studio Code with the Azure Pipelines extension did not catch the error, so you end up with a failing build.


Figure 1: Wrong YAML syntax due to multiline PowerShell command line

It turns out that the “View YAML” button of the classic graphical editor misses the extra indentation needed for the content of the PowerShell script; the above task should be fixed in this way:


Figure 2: Correct syntax to include a multiline script

If you want to include an inline PowerShell script, most of the time you do not want to limit yourself to a single line, so you need the multiline string syntax: just use a pipe character (|) followed by a multiline string, where each newline is kept as a regular \n. The important rule is: the string must be indented one extra level with respect to the line that starts the multiline string, as highlighted in Figure 2. The indentation is important because the YAML parser considers the string finished when it encounters a new line indented less than the multiline string.

The pipe symbol at the end of a line indicates that any indented text that follows is a single multiline string. See the YAML spec – Literal styles.
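
Since the screenshots are not reproduced here, this is a minimal sketch of the correct shape; the build variable and the file path are invented for the example:

steps:
- powershell: |
    # Everything indented under the pipe belongs to the same inline script.
    $config = @{ environment = "$(BuildEnvironment)" } | ConvertTo-Json
    Set-Content -Path "$(Build.ArtifactStagingDirectory)/config.json" -Value $config
  displayName: 'Write configuration file'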

This is another reason to use the online editor for YAML builds: as you can see in Figure 1, it is able to immediately spot syntax errors.

Gian Maria

Release app with Azure DevOps Multi Stage Pipeline

Multi stage pipelines are still in preview on Azure DevOps, but it is time to experiment with a real build-release pipeline and taste the news. The biggest limit at the moment is that you can use multi stage pipelines to deploy to Kubernetes or to the cloud, but there is no support for agents inside VMs (like the standard release engine has). This support will be added in the upcoming months, but if you use Azure or Kubernetes as a target you can already use it.

My sample solution is on GitHub; it contains a really basic ASP.NET Core project with some basic REST APIs and a really simple Angular application. One of the advantages of having everything in the repository is that you can simply fork my repository and experiment.

Thanks to multi stage pipelines we can finally have the build-test-release process directly expressed in source code.

First of all you need to enable Multi Stage Pipelines for your account in the Preview Features panel, reachable by clicking on your user icon in the upper right part of the page.


Figure 1: Enable MultiStage Pipeline with the Preview Features option for your user

Once multi stage pipelines are enabled, all I need to do is create a nice release file to deploy my app to Azure. The complete file is here: https://github.com/alkampfergit/AzureDevopsReleaseSamples/blob/develop/CoreBasicSample/builds/build-and-package.yaml; I will highlight the most important parts. This is the starting part.


Figure 2: First part of the pipeline

One of the core differences from a standard pipeline file is the structure of jobs: after trigger and variables, instead of directly having jobs, we get a stages section, followed by a list of stages that in turn contain jobs. In this example the first stage is called build_test; it contains all the jobs to build my solution, run some tests and compile the Angular application. Inside a single stage we can have more than one job, and in this particular pipeline I divided the build_test stage into two jobs: the first is devoted to building the ASP.NET Core app, the other builds the Angular application.
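
Since the screenshot is not reproduced here, this is an abridged sketch of the overall shape; job names and steps are illustrative and do not match the linked file exactly:

stages:
- stage: build_test
  jobs:
  - job: build_core            # builds and tests the ASP.NET Core solution
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: dotnet build && dotnet test
  - job: build_angular         # independent job, so it can run in parallel with build_core
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: npm install && npm run build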


Figure 3: Second job of first stage, building angular app.

This part should be familiar to everyone used to YAML pipelines, because it is, indeed, a standard sequence of jobs; the only difference is that we put them under a stage. The convenient aspect of having two distinct jobs is that they can run in parallel, reducing the overall compilation time.

If you have groups of tasks that are completely unrelated, it is probably better to divide them into multiple jobs and have them run in parallel.

The second stage is much more interesting, because it contains a completely different type of job, called deployment, used to deploy my application.


Figure 4: Second stage, used to deploy the application

The dependsOn section specifies that this stage can run only after the build_test stage has finished. Then the jobs section starts; it contains a single deployment job. This is a special type of job where you can specify the pool, the name of an environment and then a deployment strategy; in this example I chose the simplest, a runOnce strategy composed of a list of standard tasks.

If you are asking yourself what the meaning of the environment parameter is, I’ll cover it in much more detail in a future post; for this example just ignore it and consider it a way to give a name to the environment you are deploying to.

MultiStage pipeline introduced a new job type called deployment, used to perform deployment of your application

All child steps of a deployment job are standard tasks used in a standard release; the only limitation of this version is that they run on the agent, you cannot run them on machines inside the environment (today you cannot add anything other than a Kubernetes cluster to an environment).

The nice aspect is that, since this stage depends on build_test, when the deployment job runs it automatically downloads the artifacts produced by the previous stage and places them under the $(Pipeline.Workspace) folder, in a subdirectory named after the artifact itself. This removes the need to explicitly transfer the artifacts of the first stage (build and test) to the deployment stage.
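
Putting the pieces described above together, a deployment stage of this kind can be sketched as follows; the environment name, pool and steps are placeholders rather than the exact content of the linked pipeline:

- stage: deploy
  dependsOn: build_test
  jobs:
  - deployment: deploy_web          # the new deployment job type
    pool:
      vmImage: 'ubuntu-latest'
    environment: 'demo-environment' # placeholder name, see the note above
    strategy:
      runOnce:
        deploy:
          steps:
          # artifacts produced by the previous stage are downloaded automatically
          # under $(Pipeline.Workspace)/<artifact name>
          - script: ls $(Pipeline.Workspace)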


Figure 5: Steps for deploying my site to azure.

Deploying the site is really simple: I just unzip the ASP.NET website to a subdirectory called FullSite, then copy all the compiled Angular files into the www folder and finally use a standard AzureRmWebAppDeployment task to deploy my site to my Azure website.

Running the pipeline shows you a different user interface than a standard build, clearly showing the result for each distinct stage.


Figure 6: Result of a multi stage pipeline has a different User Interface

I really appreciate this nice graphical representation of how the stages are related. For this example the structure is really simple (two sequential stages), but it clearly shows the flow of deployment and it is invaluable for more complex scenarios. If you click on Jobs you get the standard view, where all the jobs are listed in chronological order, with the Stage column allowing you to identify in which stage each job ran.


Figure 7: Result of the multi stage pipeline in jobs view

All the rest of the pipeline is pretty much the same as a standard pipeline; the only notable difference is that you need to use the stages view to download artifacts, because each stage has its own artifacts.

Figure 8: Downloading artifacts is possible only in the stages view, because each stage has its own artifacts.

Another nice aspect is that you can simply rerun each stage, which is useful in some special situations (like when your site is corrupted and you want to redeploy without rebuilding everything).

Now I only need to check if my site was deployed correctly and… voilà, everything worked as expected, my site is up and running.


Figure 9: Interface of my really simple sample app

Even if multi stage pipelines are still in preview, if you need to deploy to Azure or Kubernetes they can already be used without problems; the real limitation of the current implementation is the inability to deploy with agents inside VMs, a real must-have if you have on-premises environments.

In the next post I’ll deal a little more with Environments.

Gian Maria.

Azure DevOps gems, YAML Pipeline and Templates

If you read my blog you already know that I’m a great fan of YAML pipelines instead of the graphical editor in the Web UI. There are lots of reasons why you should use YAML, one above all the ability to branch the pipeline definition together with the code, but there is another really important feature: templates.

There is really detailed documentation on MSDN on how to use this feature, but I want to give you a complete walkthrough on how to start using templates effectively. Thanks to templates you can put a standard build definition, with steps or jobs, in a template file, then reference that file from the real build, just adding parameters.

The ability to capture a sequence of steps in a common template file and reuse it over and over again in real pipelines is probably one of the top reasons for moving to YAML templates.

One of the most common scenarios for me is: an account with lots of utility projects (multi-targeted for full framework and .NET Standard), each one with its own Git repository and the need for a standard CI definition to:

1) Build the solution
2) Run tests
3) Pack a Nuget Package with semantic versioning
4) Publish Nuget Package inside an Azure DevOps private package repository

If you work on a big project you usually have lots of these small projects: utilities for Castle, Serilog, security, general purpose code, etc. In this scenario it is really annoying to define a pipeline for each project with the graphical editor, so it is pretty natural to move to YAML. You can start from a standard file, copy it into each repository and then adapt it to the specific project, but when a task is updated you need to update every project again. The main problem with this approach is that after some time the builds are no longer in sync and each project starts to behave differently.

I start by defining my template once, in a dedicated repository; then I can reuse it in any project. When the template changes, I want to be able to manually update all the pipelines to reference the new version or, even better, decide which projects will be updated automatically.

Let's start with the real build file, the one included in the project repository, and check how to reference a template stored in another repository. The only limit is that the template repository must be in the same organization or on GitHub. Here is the full content of the file:

trigger:
- master
- develop
- release/*
- hotfix/*
- feature/*

resources:
  repositories:
    - repository: templatesRepository
      type: git
      name: Jarvis/BuildScripts
      ref: refs/heads/hotfix/0.1.1

jobs:

- template: 'NetStandardTestAndNuget.yaml@templatesRepository'
  
  parameters:
    buildName: 'JarvisAuthCi'
    solution: 'src/Jarvis.Auth.sln'
    nugetProject: 'src/Jarvis.Auth.Client/Jarvis.Auth.Client.csproj'
    nugetProjectDir: 'src/Jarvis.Auth.Client'

The file is really simple: it starts with the triggers (as for a standard YAML build), then comes a resources section, which allows you to reference objects that live outside the pipeline. In this specific example I’m declaring that this pipeline uses a resource called templatesRepository, an external Git repository (in the same organization) called BuildScripts, contained in the Team Project called Jarvis; finally, the ref property allows me to choose the branch or tag to use, with standard Git refs syntax (refs/heads/master, refs/heads/develop, refs/tags/xxxx, etc.). In this specific example I’m freezing the version of the build scripts by referencing a specific ref (refs/heads/hotfix/0.1.1); if the BuildScripts repository is upgraded, this build will keep referencing that version. This implies that, if I change the BuildScripts repository, I need to manually update this build to reference the newer version. If I want this definition to automatically use new versions, I can simply reference the master or develop branch.

The real advantage of having the template versioned in another repository is that it can use GitFlow, so every pipeline that uses the template can choose a specific version, the latest stable or even the latest development version.

Finally I start to define jobs, but instead of defining them inside this YAML file, I declare that this pipeline will use a template called NetStandardTestAndNuget.yaml contained in the templatesRepository resource. Following the template reference, I specify all the parameters needed by the template to run. In this specific example I have four parameters:

buildName: the name of the build; I use a custom task based on GitVersion that renames each build using this parameter followed by the semantic version.
solution: the path of the solution file to build.
nugetProject: the path of the csproj that contains the package to be published.
nugetProjectDir: the directory of the csproj to publish.

The last parameter could be derived from the third, but I want to keep the YAML simple, so I require the user of the template to explicitly pass the directory of the project, which will be used as the workingDirectory parameter for the dotnet pack command.

Now the real fun starts: let's examine the template file contained in the other repository. Usually a template file starts with a parameters section where it declares the parameters it expects.

parameters:
  buildName: 'Specify name'
  solution: ''
  buildPlatform: 'ANY CPU'
  buildConfiguration: 'Release'
  nugetProject: ''
  nugetProjectDir: ''
  dotNetCoreVersion: '2.2.301'

As you can see, the syntax is really simple: just specify the name of the parameter followed by its default value. In this example I really need the four parameters described in the previous part; the others have sensible defaults.

Following the parameters section, a template file can specify steps or even entire jobs; in this example I want to define two distinct jobs, one to build and run tests and the other for NuGet packaging and publishing.

jobs:

- job: 'Build_and_Test'
  pool:
    name: Default

  steps:
  - task: DotNetCoreInstaller@0
    displayName: 'Use .NET Core sdk ${{parameters.dotNetCoreVersion}}'
    inputs:
      version: ${{parameters.dotNetCoreVersion}}

As you can see, I’m simply writing a standard jobs section that starts with the Build_and_Test job, which will run on the Default pool. The job starts with a DotNetCoreInstaller step, where you can see that to reference a parameter you need to use the special syntax ${{parameters.parametername}}. The beautiful aspect of templates is that they are exactly like a standard pipeline definition; just use the ${{}} syntax to reference parameters.

The Build_and_Test job proceeds with standard build and test tasks and determines (with GitVersion) the semantic version for the package. Since this value will be used in other jobs, I need to make it available with a specific PowerShell task.

  - powershell: echo "##vso[task.setvariable variable=NugetVersion;isOutput=true]$(NugetVersion)"
    name: 'SetNugetVersion'

This task simply re-emits the value of $(NugetVersion) as a variable named NugetVersion, but with isOutput=true, to make it available to other jobs in the pipeline. Now I can define the other job of the template, which packs and publishes the NuGet package.

- job: 'Pack_Nuget'
  dependsOn: 'Build_and_Test'

  pool:
    name: Default

  variables:
    NugetVersion: $[ dependencies.Build_and_Test.outputs['SetNugetVersion.NugetVersion'] ]

The only difference from the previous job is the declaration of the NugetVersion variable, with a special syntax that allows referencing the output of a previous job. Now I simply trigger the build from the original project and everything runs just fine.
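
I am not reproducing the whole template here, but a sketch of a pack step that consumes the propagated version and the template parameters could look like this (the real template may use different tasks):

  steps:
  - powershell: dotnet pack --configuration ${{parameters.buildConfiguration}} /p:PackageVersion=$(NugetVersion)
    displayName: 'Pack NuGet package'
    workingDirectory: ${{parameters.nugetProjectDir}}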


Figure 1: Standard build for library project, where I use the whole definition in a template file.

As you can see, thanks to templates, the real pipeline definition for my project is 23 lines long, and I can simply copy and paste it to every utility repository, change 4 lines (the template parameters) and everything runs just fine.

Using templates lowers the barrier for continuous integration: every member of the team can start a new utility project and just set up a standard pipeline, even if he/she is not an expert.

Using templates brings a lot of advantages to the team, on top of the standard advantages of plain YAML syntax.

First: you can create standards for all pipeline definitions; instead of having a different pipeline structure for each project, templates allow you to define a set of standard pipelines and reuse them across multiple projects.

Second: you get automatic updates; thanks to the ability to reference templates from another repository, it is possible to just update the template and have all the pipelines that reference it automatically use the new version (by referencing a branch). You keep the ability to pin a specific version if needed (by referencing a tag or a specific commit).

Third: you lower the barrier for creating pipelines for all team members who do not have a good knowledge of Azure Pipelines; they can simply copy the build, change the parameters and they are ready to go.

If you still have pipelines defined with the graphical editor, it is time to start upgrading to YAML syntax right now.

Happy Azure Devops.