Exclude Folders from SonarQube analysis

Creating a build that is capable of performing a SonarQube analysis in VSTS / TFS is a really simple task, thanks to the two tasks that are present out-of-the-box.

image

Figure 1: Build that uses Sonarqube tasks to perform analysis

The problem with a project that has been alive for more than a couple of years is that you usually get a really bad report when you do your first analysis. This happens because, without constant analysis, the code surely has some smells.

When you analyze your project for the first time, the number of issues can be so high that you get really discouraged. Before losing any hope, check whether the errors are really part of your code. In a project where I’m working we got really bad numbers, but I was 100% sure the problem was not in our code.

To diagnose the problem, simply log in to the project, then go to the Code view (something like http://build:9000/code/?id=projectName), where you will see a summary of all the bugs. Unfortunately you cannot sort by number of bugs, so just scroll down to find the parts of the code with the most errors.

SNAGHTML139609

Figure 2: 185 bugs are located in scripts folder

In this situation, we have 185 bugs signaled by the javascript analyzer under the scripts folder, but that folder contains our code as well as third party code. This is not actually the very first analysis: the first analysis showed a folder with more than 3k errors, and it was the folder that stores all the third party javascript libraries.

If you do not use npm it is quite normal to have third party javascript code in your source code repository, and if you are using SonarQube you absolutely need to configure it to exclude those directories. Just go to the administration page of the project: in the Analysis Scope section you will find the Source File Exclusions setting, which allows you to exclude some directories from the analysis.

SNAGHTML16d611

Figure 3: Exclude folder for analysis

In our situation the vast majority of errors came from the angular library, from all the scripts of the skin we use and from third party libraries stored under the /app/scripts/lib folder. After the exclusion, the number of bugs dropped from almost 7k to 500.
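For reference, the same exclusions can also be expressed through the sonar.exclusions property, which takes a comma-separated list of glob patterns relative to the project base directory. The patterns below are only a sketch based on the folders mentioned above, so adapt the paths to your repository:

```
# example only: exclude third party javascript from analysis
sonar.exclusions=app/scripts/lib/**, app/scripts/angular*/**
```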

If you add SonarQube analysis to an existing project that has third party javascript code in the source code repository, please always check where the errors are and exclude folders accordingly.

Gian Maria.

Decide to publish artifacts on Server or Network Share at queue time

A build usually produces artifacts and thanks to the extreme flexibility of VSTS / TFS Build system, you have complete freedom on what to include as artifacts and where you should locate them.

Actually you have a couple of options out of the box: a network share or directly on your VSTS / TFS server. The second option is especially interesting, because you do not need a special network share and you do not have permission issues (every script or person that can access the server and has build permission can use the artifacts). Having everything (build results and artifacts) on the server simplifies your architecture, but it is not always feasible.

Thanks to VSTS / TFS you can store build artifacts directly on your server, simplifying the layout of your ALM infrastructure.

The main problem with artifacts stored on the server happens with VSTS: if you have really low upload bandwidth, like me, it is really a pain to wait for the artifacts to be uploaded to VSTS, and it is equally a pain to wait for the releases to download everything from the web. When your build server is local and the release machine, or whatever tool needs to consume build artifacts, is on the same network, using a network share is the best option, especially if you have a good Gigabit network.

Where to store the artifacts is not something that I want embedded in my build definition; I want to be able to choose at queue time. But this is not possible, because the location of the artifacts cannot be parameterized in the build.

The best solution would be to give the user the ability to decide at queue time where to store artifacts. This would allow you to schedule special builds that store artifacts on a network share instead of on the server.

The obvious and simple solution is to use two Publish Artifacts tasks: one configured to store artifacts on the server, the other configured to store artifacts on a network share. Then create a simple variable called PublishArtifactsOnServer and run the Publish Artifacts task configured to publish on the server only when this value is true.

SNAGHTML7ebdcd

Figure 1: Publish Artifacts task configured to store artifacts on the server, run on a custom condition

In Figure 1 you can see the standard configuration of a Publish Artifacts task that stores everything on the server; it runs on the custom condition that the build is not failed and PublishArtifactsOnServer is true. Now you should place another Publish Artifacts task configured to store drops on a network share.

SNAGHTML80160a

Figure 2: Another Publish Artifacts task, configured to run if PublishArtifactsOnServer is not true

In Figure 2 you can verify that the task is configured with the very same options; the only differences are the Artifact Type, which is File Share with the corresponding network share path, and the Custom condition, which runs this task if the build is not failing and if the PublishArtifactsOnServer variable is NOT equal to true.
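Assuming the standard custom condition syntax, the server task would run on and(succeeded(), eq(variables['PublishArtifactsOnServer'], 'true')) while the file share task would run on and(succeeded(), ne(variables['PublishArtifactsOnServer'], 'true')), so the two conditions are mutually exclusive and exactly one publish happens on every non-failed build. A minimal sketch of that logic:

```python
def publish_target(build_succeeded, publish_on_server):
    """Mimic the pair of custom conditions: on a non-failed build
    exactly one of the two Publish Artifacts tasks runs."""
    if not build_succeeded:
        return None  # neither publish task runs on a failed build
    # eq()/ne() compare strings case-insensitively in build conditions
    if str(publish_on_server).lower() == 'true':
        return 'server'  # and(succeeded(), eq(..., 'true'))
    return 'file share'  # and(succeeded(), ne(..., 'true'))
```

Note that any value other than true, including a build queued without touching the variable at all, falls through to the file share task.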

This is another scenario where Custom Conditions on tasks allow for a highly parameterized build, letting you specify at queue time whether you want your artifacts stored on the server or on a network share. The only drawback of this solution is that you need to duplicate your tasks, but that is a really simple thing to do. Now if you queue a build where PublishArtifactsOnServer is true, you can verify that your artifacts are indeed stored on the server.

2017-07-07T16:28:21.0609004Z ##[section]Starting: Publish Artifact: primary drop 
2017-07-07T16:28:21.0618909Z ==============================================================================
2017-07-07T16:28:21.0618909Z Task         : Publish Build Artifacts
2017-07-07T16:28:21.0618909Z Description  : Publish Build artifacts to the server or a file share
2017-07-07T16:28:21.0618909Z Version      : 1.0.42
2017-07-07T16:28:21.0618909Z Author       : Microsoft Corporation
2017-07-07T16:28:21.0618909Z Help         : [More Information](https://go.microsoft.com/fwlink/?LinkID=708390)
2017-07-07T16:28:21.0618909Z ==============================================================================
2017-07-07T16:28:22.6377556Z ##[section]Async Command Start: Upload Artifact
2017-07-07T16:28:22.6377556Z Uploading 2 files
2017-07-07T16:28:27.7049869Z Total file: 2 ---- Processed file: 1 (50%)
2017-07-07T16:28:32.7375546Z Uploading 'Primary/xxx.7z' (14%)
2017-07-07T16:28:32.7375546Z Uploading 'Primary/xxx.7z' (28%)
2017-07-07T16:28:32.7375546Z Uploading 'Primary/xxx.7z' (42%)
2017-07-07T16:28:37.7827330Z Uploading 'Primary/xxx.7z' (57%)
2017-07-07T16:28:37.7827330Z Uploading 'Primary/xxx.7z' (71%)
2017-07-07T16:28:40.3306829Z File upload succeed.
2017-07-07T16:28:40.3306829Z Upload 'C:\vso\_work\13\a\primary' to file container: '#/534950/Primary'
2017-07-07T16:28:41.4309231Z Associated artifact 214 with build 4552
2017-07-07T16:28:41.4309231Z ##[section]Async Command End: Upload Artifact
2017-07-07T16:28:41.4309231Z ##[section]Finishing: Publish Artifact: primary drop 

As you can see from the output, the task uploaded the artifacts to the server. Now you can queue another build with PublishArtifactsOnServer set to false and you should see that the other task is executed; this time the artifacts end up on the network share.

Gian Maria

Hyper-V and Windows AutoLogon

When you configure build agents, and especially when you configure release agents for VSTS, it is quite normal to have some installations where you want to use AutoLogon. This is needed whenever you want to run integration tests that need to interact with the UI. Having AutoLogon enabled avoids the need to manually log in and start the agent when the machine is rebooted, because you always have an open user session that runs the agent.
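For reference, the classic Windows AutoLogon setup is just a handful of values under the Winlogon registry key; the user name and domain below are placeholders. Keep in mind that DefaultPassword is stored in clear text, which is why tools like Sysinternals Autologon, which store the password as an LSA secret instead, are usually preferable:

```
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
    AutoAdminLogon    = "1"          (REG_SZ)
    DefaultUserName   = "builduser"  (REG_SZ)
    DefaultDomainName = "MYDOMAIN"   (REG_SZ)
    DefaultPassword   = "..."        (REG_SZ)
```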

There are lots of articles, like this one, that explain how to configure everything, but last Saturday I had a problem with a Windows Server 2016 machine where that technique did not seem to work. I rebooted the machine, but the Hyper-V console showed me the login pane and I was really puzzled.

image

Figure 1: Login screen in Hyper-V console.

As you can see from Figure 1, the Hyper-V console shows the login page, so I incorrectly believed that the AutoLogon did not work. I say incorrectly because, while trying to troubleshoot the problem, a friend told me to check the Hyper-V Manager console, and here is what I saw.

image

Figure 2: Small Thumbnail of the VM in Hyper-V snap-in.

From Figure 2 you can see that the thumbnail of the VM does not show the login page; it shows a user session logged into the machine. A quick check confirmed that the agent was online, so the automatic logon worked, but my Virtual Machine console still showed me the logon screen.

The reason is the Enhanced Session feature, available in the View menu of the Virtual Machine console. Enhanced Session is used to allow window resizing, clipboard transfer and so on, and it uses Remote Desktop under the hood. If you turn off Enhanced Session you use the basic Hyper-V console, which shows you that a user is really connected to the system.

image

Figure 3: The console in basic session mode correctly shows the logged user

It turns out that, with enhanced mode, you are not able to see the session that was started automatically, but the session is active. If you really want to verify what is happening, you can simply switch to basic mode.

Gian Maria.

Deploy test agent and run functional test tasks

In the VSTS / TFS build system there are a couple of tasks that are really useful to execute UAT or functional tests during a build: the first one deploys the test agent remotely to a target machine, while the second one runs a set of tests on that machine using the agent.

If you use multiple Run Functional Tests tasks, please be sure that before each one there is a corresponding Deploy Test Agent task, or you will get an error. I have a build that runs some functional tests; then I added another Run Functional Tests task to run a second set of functional tests. The result is that the first run does not have a problem, while the second one fails with a strange error.

2017-07-13T17:59:51.1964581Z DistributedTests: build location: c:\uatTest\bus
2017-07-13T17:59:51.1964581Z DistributedTests: build id: 4797
2017-07-13T17:59:51.1964581Z DistributedTests: test configuration mapping: 
2017-07-13T17:59:52.4924710Z DistributedTests: Test Run with Id 1697 Queued
2017-07-13T17:59:52.7134843Z DistributedTests: Test run '1697' is in 'InProgress' state.
2017-07-13T18:00:02.9198747Z DistributedTests: Test run '1697' is in 'Aborted' state.
2017-07-13T18:00:12.9219883Z ##[warning]DistributedTests: Test run is aborted. Logging details of the run logs.
2017-07-13T18:00:13.1270663Z ##[warning]DistributedTests: New test run created.
2017-07-13T18:00:13.1280661Z Test Run queued for Project Collection Build Service (prxm).
2017-07-13T18:00:13.1280661Z 
2017-07-13T18:00:13.1280661Z ##[warning]DistributedTests: Test discovery started.
2017-07-13T18:00:13.1280661Z ##[warning]DistributedTests: UnExpected error occured during test execution. Try again.
2017-07-13T18:00:13.1280661Z ##[warning]DistributedTests: Error : Some tests could not run because all test agents of this testrun were unreachable for long time. Ensure that all testagents are able to communicate with the server and retry again.

This happens because the remote test runner cannot be used to perform a second run right after a previous run has finished. The solution is simple: add another Deploy Test Agent task.

image

Figure 1: Add a Deploy Test Agent before any Run Functional Tests task

This solved the problem and now the build runs perfectly.

Gian Maria.

Add a capability to agent in a Deployment Group

When you deploy a Build agent in VSTS / TFS, in the administration page you have the ability to add custom Capabilities to the agent, as you can see in Figure 1.

image

Figure 1: Adding capabilities to a standard build agent.

With the new Release Management, you can install agents on machines that will be added to Deployment Groups. If you look at the UI, you can see that the Capabilities tab lists all the capabilities of the agent, but you do not have the option to specify custom capabilities.

If you need to add some capabilities, for example the VSTest capability because the Test Agent was installed manually, you can simply add an environment variable on the machine: the agent will translate all environment variables into agent capabilities for you.

As an example, here is the result of a release that fails because of missing demands:

[Error]Unable to deploy to the target 'JVSTSINT' as some of the following demands are missing:
 [DotNetFramework, vstest, Agent.Version 

Environment variables added to the machine will become capabilities of the agent installed on that machine.

In that specific scenario I did not have the vstest capability, even if the Test Runner was installed by the build with WinRM, so I simply added the environment variable on the machine.

image

Figure 2: VSTest environment variable added to the machine

Then restart the agent service and the new capability will show up in the summary.
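The whole sequence can be sketched from an administrative command prompt; VSTest and its value here are just example names, and the agent service name below is a placeholder that depends on how your agent was configured:

```
rem machine-level environment variables become agent capabilities
setx VSTest "15.0" /M

rem restart the agent service so it re-reads its capabilities
net stop vstsagent.machinename
net start vstsagent.machinename
```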

SNAGHTMLc29ccd

Figure 3: The VSTest capability now shows up in the agent capabilities list.

This technique allows you to specify any capability you want on agents installed on machines in Deployment Groups for VSTS Release Management.

Gian Maria