Run SonarCloud analysis in VSTS / TFS Build

Running a SonarQube analysis from TFS or VSTS is really easy, because we can use pre-made build tasks that require only a few parameters and the game is done. If you have an open source project, it makes a lot of sense to use a public account on SonarCloud: you do not need to maintain a Sonar server on-premise and you can also share the analysis results with the community.

For open source projects, SonarCloud is available with zero effort, and thanks to VSTS and TFS you can automate the analysis in a few steps.

The first step is creating an organization in SonarCloud; if you prefer, you can simply log in with your GitHub account and everything is ready. After the creation of the organization, you should create a new project and generate a token to send analysis results to the SonarCloud server. Everything is done with a simple wizard and it takes only a matter of seconds to have your project created and ready to be used.

Once you have your project key and token, you need to add the SonarCloud endpoint to the list of available endpoints. You only need to give the connection a name, specify https://sonarcloud.io as Server Url, and add the token generated during project creation.


Figure 1: Configuration of sonar cloud endpoint

Now you can configure the build to perform the analysis. The first task is the “prepare analysis” task, as you can see in Figure 2. You should select the endpoint created in the previous step and fill in the project key and project name, but you also need to specify a couple of properties in the Advanced section. The first one is sonar.organization, and it is required or the analysis will fail. This is the only difference from an on-premise SonarQube server, where you do not need to add the organization name.
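As a reference, the same configuration can be sketched in YAML form. The task name, version and input names below are assumptions based on the SonarQube / SonarCloud build task extension, and the endpoint and project values are placeholders:

```yaml
# Sketch of the "prepare analysis" step. Task name, version and
# input names are assumptions; replace the values with your own.
- task: SonarQubePrepare@4
  inputs:
    SonarQube: 'SonarCloud endpoint'   # service endpoint created earlier
    projectKey: 'my-project-key'
    projectName: 'My Project'
    extraProperties: |
      # Required for SonarCloud: without it the analysis fails.
      sonar.organization=my-organization
```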


Figure 2: Configuration of the prepare analysis task.

The other setting to be specified in Additional Properties is sonar.branch.name, to perform branch-based analysis, a feature that is available in SonarCloud but on-premise only with the enterprise version. If you are using Git, you can simply use $(Build.SourceBranchName) to analyze the current branch.
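In YAML form, the branch setting goes in the same Additional Properties block as the organization. The property name sonar.branch.name is an assumption based on the SonarCloud branch analysis documentation:

```yaml
# Additional analysis properties: organization plus branch analysis.
extraProperties: |
  sonar.organization=my-organization
  sonar.branch.name=$(Build.SourceBranchName)
```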


Figure 3: Analysis performed on NStore project with branch analysis enabled.

The cool part of the process is that SonarCloud requires zero installation time and less than one minute to create the first project, and thanks to the VSTS / TFS build engine you can automate the analysis in less than two minutes.

Gian Maria.

VSTS / TFS: use Wildcards for continuous integration in Git

If you set up a build in VSTS / TFS against a Git repository, you can choose to trigger the build when a specific branch changes. You can press the plus button and a nice combobox appears to select the branch you want to monitor.


Figure 1: Adding a branch as trigger in VSTS / TFS Build

This means that if you add feature/1312_languageSelector, each time a new commit is pushed on that branch, a new build will trigger. If you use GitFlow you are interested in building every feature branch, but you do not want to manually add each feature branch to the list of monitored branches.

If you look closely, it turns out that the combo you use to choose branches accepts wildcards. If you look at Figure 2, you can verify that pressing the add button adds a new line with a textbox that allows you to “Filter my branches”.


Figure 2: Filter branch text

Actually the name is misleading, because you can simply write feature/* and press enter to instruct VSTS to monitor every branch that starts with feature/.

Using wildcards in trigger list allows you to trigger a build for every feature branch.

For people not used to VSTS, this feature often goes unnoticed, because they think the textbox can be used only to filter a single branch. With this technique you can trigger a build for each change on each standard branch in the GitFlow process.


Figure 3: Trigger with wildcards

As you can see in Figure 3, I instructed TFS / VSTS to build develop, master and all hotfix, feature or release branches.
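If you prefer YAML builds, the same wildcard filters can be expressed in the trigger section of the definition (syntax from later versions of the YAML build schema; branch names follow the GitFlow convention of Figure 3):

```yaml
# CI trigger equivalent to Figure 3: stable branches plus every
# feature, hotfix and release branch, matched by wildcard.
trigger:
  branches:
    include:
    - master
    - develop
    - feature/*
    - hotfix/*
    - release/*
```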

Gian Maria

New cool feature of VSTS to limit impact of erratic tests

I’ve blogged some time ago about running UAT testing with a mix of Build + Release in VSTS. UAT tests are often hard to write, because they can be erratic. As an example, we have a software system composed of five services that collaborate, built with CQRS and Event Sourcing, so most of the tests are based on a typical pattern: do something, then wait for something to happen.

Writing tests that interact with the UI or are based on several services interacting together can be difficult.

The problem is: whenever you “wait for something to happen” you are evoking dragons. We wrote a lot of extensions that help writing tests, such as WaitForReadModel, which polls the read model waiting for the final result of a command, but sometimes these kinds of tests are not stable, because during a full run some tests are slower, or they simply fail because some other test changed a precondition.

We spent time making tests stable, but some tests tend to fail erratically, and you cannot spend tons of time avoiding this. When they fail in Visual Studio or the NUnit console, you can simply re-run only the failed tests, and usually they pass without a single error. This is perfectly fine when you are running everything manually: you launch the tests in a VM, come back after a little while, and if some tests failed, you just re-run them and verify whether something was really broken.

Erratic tests are really difficult to handle in a build, because if the build is always red because of them, the team tends to ignore the build output.

When a build often fails due to false positives, the team tends to ignore that build and it starts losing value.

Thanks to a cool new feature in VSTS, we can instruct our builds to automatically re-run those kinds of tests. You can read everything about it in the release notes for VSTS.


Figure 1: Option to re-run failed tests

It could seem like a little feature, especially because usually your tests should not be erratic, but it is actually a HUGE feature for UAT testing, where avoiding erratic tests is simply not feasible or requires a really huge amount of work. Thanks to this feature, the build can automatically re-run failed tests, greatly reducing the problem of erratic tests and showing as failed only the tests that consistently failed for more than a given number of runs.
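As a sketch, this is how the option can be enabled on version 2 of the Visual Studio Test task in a YAML definition (input names are assumptions based on the VSTest task schema; the assembly pattern is a placeholder):

```yaml
# Version 2 of the test runner task exposes the re-run options.
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**\*Tests.dll'   # placeholder test assembly pattern
    rerunFailedTests: true              # automatically re-run failing tests
    rerunMaxAttempts: 3                 # stop re-running after this many attempts
```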

Another situation where this feature has tremendous impact is when you want to migrate a huge set of tests from sequential to parallel execution. If your tests were written assuming sequential execution, running them in parallel will produce many failures due to interacting tests. You can simply create a build, enable parallel execution and enable re-running failed tests. This helps you spot interacting tests, so you can start fixing them to run in parallel.

If you look at Figure 2, you can verify that all erratic tests are easily identified, because you can filter all tests that passed on rerun. This leaves your build green, while still letting you spot erratic tests that should be investigated to make them more stable.


Figure 2: Erratic test can be immediately discovered from build output.

I think this feature is really huge and can tremendously help running UAT and integration tests in builds and releases.

If you do not see the option shown in Figure 1, please be sure you are actually using version 2 of the test runner task, as shown in Figure 3.


Figure 3: Be sure to use version 2.* of the test runner, or the option to re-run failed tests is not present.

Gian Maria.

Converting a regular build to a YAML build

YAML build in VSTS / TFS is one of the most welcomed features in the Continuous Integration engine, because it really opens many new possibilities. Two of the most important advantages of this approach are: build definitions follow branches, so each branch can have a different definition; and, since the build is in the code, everything is audited, you can pull-request build modifications, and you can test different builds in branches just as you do with code.

Given that build definition in code is a HUGE step forward, especially for people with a strong DevOps attitude, the common question is: how can I convert all of my existing builds to YAML builds?

Edit: this is the nice aspect of VSTS, it moves forward really fast. I prepared this post while this feature was in preview, when converting required a little bit of effort; now converting a build is really simple, because you have a nice View YAML button while editing a build.


Figure 1: YAML definition of an existing build can be seen directly from build editing.

This button will open the corresponding YAML definition for the current build, making it straightforward to convert existing builds.

This is VSTS, it moves forward so fast 🙂 and if you want to do a post about a preview feature, you often need to be really quick, before new functionalities make the post obsolete.

Old Way (before the View YAML)

While the feature was in early preview, converting was more manual: we did not have an automatic tool, but with a little bit of patience the operation was not so complex.

First of all, the engine behind YAML builds is the very same as for standard builds; only the syntax is different. Thus, every standard build can be converted to a YAML build, you only need to understand the right syntax. This is how I proceeded to convert a definition.

First step: install an agent on your local machine (it is a matter of minutes), then delete everything in the _work folder and finally schedule a build with a demand equal to the name of your agent. This will schedule the regular build on your machine, and in the _work folder you will find a subfolder _tasks that contains the definitions of all the tasks used in your build. This greatly simplifies finding the name and version of each task used in your build, because you can easily open the folder with Visual Studio Code and browse everything.


Figure 2: List of tasks downloaded by the agent to execute the build. 

Then you can simply move to the History tab of the original build, where you can choose “Compare Difference” to easily view the JSON definition of the build, where you can find all the parameters of each task and their values in a super easy way.


Figure 3: Thanks to the Compare difference you can quickly have a look at the actual definition and all the parameters.


Figure 4: All tasks parameters are included in the definition and can be used to generate YAML build file.

Happy building.

Gian Maria.

YAML build in VSTS

One of the most exciting features recently introduced in VSTS is the ability to create YAML builds. You need to enable this feature because it is still in preview; as usual, you can enable it for your account from the preview feature management panel.


Figure 1: Enable YAML feature for the entire account

After you enable this feature, when you create a new build you can choose a definition based on YAML.


Figure 2: Create a build based on YAML definition

A build definition based on YAML is nothing more than the standard build system, but instead of using the graphical editor in the browser, you define the tasks to use and their sequence directly in a YAML file. To understand the syntax of the YAML file you can follow the tutorial in MSDN at this address.

YAML build definitions greatly enhance the build system of VSTS, because they allow you to create multiple YAML files directly in code to define builds.

This is a preview, and if you want to experiment you need to browse the source code of the tasks directly on GitHub. I’ll do a more detailed post on how to set up a real build by looking up tasks; for this simple introductory post I’ll use a super simple build definition with a custom task of mine.


Figure 3: Simple build definition with a simple task.

This is not a real build, but it shows how simple it is to create a YAML build: just write one task after another and you are ready to go. Now I can create a YAML build, and the only option is the path of the YAML definition.
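As a minimal sketch, a definition of this kind can look like the following (the steps are placeholders for illustration, not the actual custom task of Figure 3, and the schema details may differ in the preview):

```yaml
# Minimal YAML build: just one step after another.
steps:
- script: echo Hello from a YAML build
  displayName: Say hello
- task: PowerShell@2
  inputs:
    targetType: inline
    script: Write-Host "Running a task from YAML"
```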


Figure 4: Simple YAML build, just choose queue, name of the build and the path of YAML definition


Figure 5: Queue a build and verify that your definition was run

Ok, we can agree that the overall experience is still rough, especially because you need to look up the names and parameters of the tasks directly from code, but I really believe this is the future. Here are the advantages I see in this approach.

YAML builds use the very same engine as standard builds. This is one of my favorites: you have a single build engine, and the YAML definition is only an alternate way to define a build, so you can switch whenever you want, write a custom task once and run it for both classic builds and YAML builds, and so on.

Each branch has its own definition of the build. When you work with Git it is standard to have many branches, and with classic builds, having a build that works with different branches is not always easy. Thanks to conditional execution you can execute a task only on a specific branch, but this is clumsy. Thanks to the YAML file, each branch can change its build definition.

Infrastructure as code. From a DevOps perspective, I prefer to have everything in the code. I can agree that build is not really infrastructure, but the concept is the same. Build definitions are really part of the project and they should be placed in the code.

Moving between accounts. Thanks to Git, moving code between accounts (e.g. to consolidate accounts) is super easy, but moving builds between accounts is really not an easy process. Thanks to YAML definitions, as soon as I move the code, I also move the builds, which simplifies moving code between accounts.

More to come.

Gian Maria.