Tfx error: Failed to find api location for area

Tfx-cli is a cross-platform command-line tool for TFS / VSTS that can be used to accomplish various tasks. To connect to your favorite instance, all you have to do is generate a Personal Access Token and use the command

tfx login

You will be prompted for the URL of the server and your Personal Access Token to access the server. In Figure 1 I connected to my VSTS account.


Figure 1: Perform login with tfx-cli utility

Now if I run a command, e.g. tfx build list, it asks for the name of the Team Project to use and then I get this error: Failed to find api location for area: build id: 0cd358e1-9217-4d94-8269-1c1ee6f93dcf

The reason for the above error is an incorrect service URL.

When you log in with tfx-cli, be sure to specify a valid Project Collection URL, without omitting the name of the Project Collection.

In the example above I omitted DefaultCollection from the service URL, and this generates the above error. The tricky part is that tfx-cli reports Logged in successfully, because it is able to connect to the server, but then you are not able to execute commands, because the address is incorrect.

To fix the situation, log out with tfx logout, then issue a tfx login again, this time specifying the full URL of your collection.
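For reference, the fix amounts to the two commands below; the account URL is a placeholder, substitute your own (tfx prompts for the URL and the token when you log in).

```
tfx logout
tfx login
rem When prompted, enter the FULL collection URL, e.g.:
rem   https://yourAccount.visualstudio.com/DefaultCollection
rem and then paste your Personal Access Token
```

After logging in again with the full collection URL, commands like tfx build list should resolve the api location correctly.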

Gian Maria.

Facebook Code Generator on Windows Phone

If you care about your online security, you have probably enabled two factor authentication on every service that supports it. If you enable it on Facebook you will be prompted to enter a code from your Code Generator application. If you do not know how to obtain such a code and ask for help, the site tells you that it is available in your phone application.

Sadly, it seems that the Windows Phone Facebook app does not have this option. The simplest solution is to use the Microsoft Authenticator app, as described in this article.

The algorithm behind secure code generation (TOTP) is a standard, and the Microsoft Authenticator Windows Phone app works perfectly even for services that ask you for a Google Authenticator code.

Gian Maria.

Analyze your project with SonarQube in TFS Build vNext

When you have your SonarQube server up and running, it is time to put some data into it. You will be amazed at how simple it is with build vNext and Visual Studio Online.

Installing the analyzer

As a prerequisite, you need to install Java on the machine where the agent is running, then download the Msbuild.SonarQube.Runner, unblock the zip and unzip everything to a folder on your hard drive. Make sure that folder is in your PATH, so you can launch the runner from any command prompt.

Then open the SonarQube.Analysis.xml file and change the configuration to point to your server and database.

Figure 1: Configuration of Msbuild SonarQube runner
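The figure above shows the configuration; a hedged sketch of the file is below. The host names, database server and password are placeholder assumptions, and the exact namespace may vary with the runner version.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <!-- URL of the SonarQube web server -->
  <Property Name="sonar.host.url">http://mysonarserver:9000</Property>
  <!-- Direct database connection used by the runner to save analysis results -->
  <Property Name="sonar.jdbc.url">jdbc:jtds:sqlserver://mydbserver/Sonar</Property>
  <Property Name="sonar.jdbc.username">sonar</Property>
  <Property Name="sonar.jdbc.password">YourPassword</Property>
</SonarQubeAnalysisProperties>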

Remember that you need to open the firewall port of the server where the SonarQube database is running, because the agent connects directly to the database to save analysis results. This is a common source of errors, because you may incorrectly assume that the agent talks to the server through an endpoint (that will be available in future versions of SonarQube).

Pay attention: the agent saves results directly to the database.

Remember also to install the C# plugin, or whatever plugin you need (e.g. Visual Basic .NET), to support the language/technology you are using.

Manual analysis

On the machine where the vNext agent runs, you need to verify that everything works. Just open a command prompt and navigate to a folder containing the project you want to analyze (you can use the _work folder of the agent). Once you are in a folder with a .sln file, start the analysis with this command.
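The begin command did not survive in this post; below is a hedged reconstruction. The project key matches the one used in the end command later in the article, while the project name and version are assumptions, and switch names can vary between runner versions.

```
rem begin step: connects to the server, downloads the quality profile
rem and prepares the build for analysis (/n and /v values are hypothetical)
msbuild.sonarqube.runner begin /k:JarvisConfigurationManager /n:JarvisConfigurationManager /v:1.0
```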


This command connects to the SonarQube server and downloads the analysis profile. Then launch msbuild to compile the solution, and finally run the real analysis with the end command.

msbuild.sonarqube.runner /key:JarvisConfigurationManager end

Be sure to verify that everything works with a manual analysis, because it takes less time to troubleshoot problems this way.

If everything goes well, you should see some data inside your SonarQube server. Doing a manual analysis is a must: it ensures that Java is installed correctly, firewall ports are open, DNS names resolve and so on. Once you can do a manual analysis, you can be 99% sure that the analysis will also succeed during the build.

Running in Build vNext

If everything is ok, I suggest tagging the agent with a SonarQube capability, to identify it as capable of doing SonarQube analysis.

With a custom capability we can identify the agents that can perform specific tasks

Figure 2: Adding custom capability to the agent

Now the build must be changed to require this specific capability for the agent.

Figure 3: Adding Demands on the build to request specific capabilities

Using a custom capability is a good way to communicate that someone manually tested the SonarQube runner on that machine, so you can be pretty sure that the build will not encounter problems.

Using custom demands will make your life easier because you are explicitly declaring what that agent can do.

Now you can customize the build to launch the above two command-line scripts to do the analysis, as you did manually before. You can do similar steps with a XAML Build: just add a script that launches the begin step pre-build, and the end step after tests have run.

But if you are using build vNext, you will be happy to know that SonarQube runner tasks are already present in VSO/TFS vNext.


Figure 4: Configure SonarQube analysis in your build.

Only the begin analysis task needs configuration, and you only need to specify the same information you saved in the SonarQube.Analysis.xml file. Since I’m using the build where I configured Semantic Versioning with GitVersion, I also have a build variable called AssemblyVersion that is automatically set by GitVersion, and I can use it to pass the version to SonarQube.

I can now schedule a build and verify the output. First of all, the output of the begin analysis task should show it connecting correctly to the server and downloading the profile.

Figure 5: Output of the Start task for SonarQube Analysis

The output of the end step should contain a much longer log, because that is when the real analysis of your code is done.

Figure 6: Analysis took 45 seconds to complete

It is important that the end analysis task is the last one, because the Sonar analyzer can pick up code coverage results from your unit tests, a metric that is controversial but gives you a nice idea of the amount of unit testing the project contains.

Figure 7: Code coverage result is correctly saved in Sonar Qube

Thanks to automatic versioning, you also have a better timeline of the status of your project.

Figure 8: Versioning correctly stored inside Sonar Qube

The entire setup should not take you more than 30 minutes.

Gian Maria.

Installing SonarQube on Windows and SQL Server

Installing SonarQube on a Windows machine with SQL Server Express as back end is quite simple, but here is some information you should know to avoid some common problems with the database layer (or at least avoid the problems I had :) )

Setting up SonarQube on Windows is easy, but sometimes you can encounter problems getting it to talk to the SQL Server database.

First of all, I avoid using integrated authentication in SQL Server, because I find it easier to set up everything with a standard SQL user. After all, my instance of SQL Express is used only for Sonar. So I create a user called sonar with a password, and remove the check for password expiration.

Figure 1: Avoid password expiration

Then you should open SQL Server Configuration Manager, where you must enable the TCP/IP protocol.

Figure 2: Enable TCP/IP protocol in Sql Server Configuration Manager

Now, I found that the Java drivers are a little bit grumpy about connecting to my database, so I decided to explicitly disable dynamic ports and specify port 1433 directly. Just double-click the TCP/IP protocol shown in Figure 2 to open its properties.

Figure 3: Disable dynamic ports and specify 1433 as TCP Port

Now create a new database called Sonar, and set the user sonar as owner of the database. Pay specific attention to the casing of the database name: I chose Sonar with a capital S.

Figure 4: Create a new database called Sonar with sonar user as owner

Now be sure to select the correct collation: you should use a collation that is Case Sensitive and Accent Sensitive, like SQL_Latin1_General_CP1_CS_AS.

Figure 5: Specify the right Collation for the database. It should be CS and AS
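The same setup can also be scripted in T-SQL instead of clicking through Management Studio; this is a hedged sketch using the names from the steps above, with a placeholder password.

```sql
-- Create the sonar login with no password expiration
CREATE LOGIN sonar WITH PASSWORD = 'YourStrongPassword', CHECK_EXPIRATION = OFF;

-- Create the database with a case- and accent-sensitive collation
CREATE DATABASE Sonar COLLATE SQL_Latin1_General_CP1_CS_AS;

-- Make sonar the owner of the database
ALTER AUTHORIZATION ON DATABASE::Sonar TO sonar;
```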

Now, just to be sure that everything is ok, try to connect from Management Studio using port 1433 with the user sonar. To specify the port, use a comma between server name and port, e.g. localhost,1433.

Figure 6: Try to connect to the server with user sonar and port 1433

Verify that you can see the Sonar database. If you are able to connect and see the Sonar DB, you have everything ready. Remember to download the JDBC driver for MS SQL at this address; once downloaded, right-click the zip file and, in the properties section, unblock the file. Then unzip the content and copy the file jtds-1.3.1.jar into the subfolder extensions\jdbc-driver\mssql of your Sonar installation (the mssql folder usually does not exist and you should create it manually).

Now you should edit the configuration file in the conf/ folder and add the connection string to the database.
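The snippet did not survive in this post; below is a hedged sketch of the relevant entries. The exact property names depend on your SonarQube version, and the password is a placeholder.

```properties
# jTDS JDBC connection string: the database name casing must match exactly
sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/Sonar;SelectMethod=Cursor
sonar.jdbc.username=sonar
sonar.jdbc.password=YourPassword
```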


Pay specific attention to the database name in the connection string because it is CASE SENSITIVE: as you can see, I specified sqlserver://localhost/Sonar with a capital S, exactly matching the database name. Since you are using an Accent Sensitive and Case Sensitive collation, it is super important that the casing of the database name matches the one used in the connection string. If you use the wrong casing, Sonar will not start and you will find this error in the sonar log.

The error “The database name component of the object qualifier must be the name of the current database” happens if you use the wrong casing for the DB name in the connection string.

You should be able now to start Sonar without any problem.

Gian Maria

Integrating GitVersion and Gitflow in your vNext Build

In a previous article I showed how to create a VSO build vNext that automatically publishes a NuGet package to MyGet (or NuGet) during a build [Publishing a Nuget package to Nuget/Myget with VSO Build vNext]. Now it is time to create a more interesting build that automatically versions your assemblies and NuGet packages based on GitFlow.

GitFlow and GitVersion

GitFlow is a simple convention to manage the branches in your Git repository, supporting a production branch, a development branch and feature/support/release/hotfix branches. If you are completely new to the subject, you can find information at the following locations:

You can also find a nice plugin for Visual Studio that supports GitFlow directly from the Visual Studio IDE and also installs GitFlow for your command-line environment in one simple click [VS 2015 version] [VS 2013 version]. Once you get accustomed to GitFlow, the next step is having a look at Semantic Versioning, a simple versioning scheme to manage your packages and assemblies.

The really good news is that a free tool called GitVersion exists that does semantic versioning simply by examining your git history, branches and tags. I strongly suggest reading the GitVersion documentation online, but if you want a quick start, in this blog post I’ll show you how to integrate it with a vNext VSO build.

Thanks to GitVersion tool you can easily manage SemVer in a build vNext with little effort.

How GitVersion works at a basic level

GitVersion can be downloaded into the root folder of your repository; once it is there, invoking it directly from the command line with the /ShowConfig parameter will generate a default configuration file.

GitVersion /ShowConfig > GitVersionConfig.yaml

This will create a default configuration file for GitVersion in the root directory, called GitVersionConfig.yaml. Having a configuration file is completely optional, because GitVersion works with default options, but it is really useful to make the default parameters explicit, so you know how Semantic Versioning is handled by the tool.

I’m not going through the various options of the tool; you can read the docs online, and I’ll blog in a future post about a couple of options I usually change from the defaults.

For the scope of this article, everything I need to know is that invoking gitversion.exe without parameters, in a folder containing a git repository with GitFlow enabled, returns JSON data. Here is a possible example:
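The sample output did not survive in this post; below is a hedged reconstruction of the JSON shape. The field names come from the GitVersion documentation, while the values are purely illustrative, chosen to match the develop-branch scenario discussed next.

```json
{
  "Major": 1,
  "Minor": 5,
  "Patch": 0,
  "BranchName": "develop",
  "SemVer": "1.5.0-unstable.9",
  "FullSemVer": "1.5.0-unstable.9",
  "AssemblySemVer": "1.5.0.0",
  "NuGetVersionV2": "1.5.0-unstable0009",
  "Sha": "0000000000000000000000000000000000000000"
}
```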


This is the result of invoking GitVersion on the develop branch; now it is time to understand how these version numbers are determined. My master branch is currently tagged 1.4.1 and, since develop will be the next version, GitVersion automatically increments the minor version number (this is the default and can be configured). The FullSemVer number contains the suffix unstable.9 because the develop branch is usually unstable, and it is 9 commits ahead of master. This immediately gives me an idea of how much work is accumulating.

Now if I start a release 1.5.0 with the command git flow release start 1.5.0, a new release/1.5.0 branch is created, and running GitVersion in that branch returns a FullSemVer with a beta suffix. The suffix is beta because a release branch is something that will be released (so it is a beta), and the trailing 0 means the branch is 0 commits ahead of develop. If you continue to push to the release branch, that last number keeps incrementing.

Finally, when you finish the release, the release branch is merged into master, master is tagged 1.5.0, and the release branch is merged back into develop. Now running GitVersion on develop returns the next minor version, because master is now on 1.5.x and develop will be the next version.
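The whole cycle described above, expressed as gitflow commands (the version number is taken from the example):

```
git flow release start 1.5.0    # branches release/1.5.0 off develop
# ...stabilization commits and pushes on release/1.5.0...
git flow release finish 1.5.0   # merges into master, tags 1.5.0, merges back into develop
```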

How you can use GitVersion on build vNext

You can read about how to integrate GitVersion with build vNext directly in the GitVersion documentation, but in this article I want to show you a slightly different approach. The way I use GitVersion is to invoke it directly from a PowerShell build file that takes care of everything about versioning.

The main reason I chose this approach is that GitVersion can store all the versioning information in environment variables, but in build vNext environment variables are not maintained between steps by default. The second reason is that I already have a build that publishes to NuGet with the build number specified as a build variable, so I’d like to grab the version numbers in my script and use them to change that variable’s value in my build.

Thanks to PowerShell, parsing JSON output is super easy; here are the simple instructions I use to invoke GitVersion and parse all the JSON output directly into a PowerShell variable.

# Invoke GitVersion and capture its JSON output as a single string
$output = & ..\GitVersion\GitVersion.exe /nofetch | Out-String
# Parse the JSON into a PowerShell object
$version = $output | ConvertFrom-Json

Parsing output of GitVersion inside a PowerShell variable gives you great flexibility on how to use all resulting numbers from GitVersion

Then I want to version my assemblies with the versions returned by GitVersion. I start by creating some PowerShell variables with all the numbers I need.

# Version numbers for the AssemblyVersion and AssemblyFileVersion attributes
$assemblyVersion = $version.AssemblySemVer
$assemblyFileVersion = $version.AssemblySemVer
# The informational version is a free string: SemVer plus the commit SHA for traceability
$assemblyInformationalVersion = ($version.SemVer + "/" + $version.Sha)

I’ll use the same PowerShell script I described in this post to version the assemblies, but this time all the versioning burden is taken by GitVersion. As you can see, I’m also using the AssemblyInformationalVersion attribute, which can be set to any string you want. This gives me a nice file version visible from Windows.
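The linked post has the full script; as a rough sketch of the idea (the recursive search and the regexes are my assumptions, not the author’s exact script), versioning the assemblies means stamping the three variables computed above into every AssemblyInfo.cs:

```powershell
# Hypothetical sketch: stamp the version attributes into every AssemblyInfo.cs
Get-ChildItem -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    $content = Get-Content $_.FullName -Raw
    $content = $content -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$assemblyVersion"")"
    $content = $content -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$assemblyFileVersion"")"
    $content = $content -replace 'AssemblyInformationalVersion\("[^"]*"\)', "AssemblyInformationalVersion(""$assemblyInformationalVersion"")"
    Set-Content $_.FullName $content
}
```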


Figure 1: Versioning of the file visible in windows.

This immediately tells me that this is a beta version, and it also gives me the SHA1 of the commit used to create the DLL: maximum traceability with minimum effort. Now it is time to use some build vNext commands to version the NuGet package.

How build vNext can accept commands from PowerShell

The build vNext infrastructure can accept commands from a PowerShell script by inspecting the output of the script, as described in this page.

One of the coolest feature of build vNext is the ability to accept commands from console output of any script language

Write-Output in PowerShell is all I need to send commands to the build vNext engine. Here is how I change some build variables:

Write-Output ("##vso[task.setvariable variable=NugetVersion;]" + $version.NugetVersionV2)
Write-Output ("##vso[task.setvariable variable=AssemblyVersion;]" + $assemblyVersion)
Write-Output ("##vso[task.setvariable variable=FileInfoVersion;]" + $assemblyFileVersion)
Write-Output ("##vso[task.setvariable variable=AssemblyInformationalVersion;]" + $assemblyInformationalVersion)

If you remember my previous post on publishing NuGet packages, you saw that I used the NugetVersion variable in the build definition to specify the version number of the NuGet package. With the first line of the previous snippet, I automatically change that number to the NuGetVersionV2 returned by GitVersion.exe. This is everything I need to version my package with SemVer.

Finally I can use one of these two instructions to change the name of the build.

Write-Output ("##vso[task.setvariable variable=build.buildnumber;]" + $version.FullSemVer)
Write-Output ("##vso[build.updatebuildnumber]" + $version.FullSemVer)

The result of these two instructions is almost the same: the first one changes the build number and also changes the variable build.buildnumber, while the second one only changes the build number, without changing the value of the build.buildnumber variable.

Final result

My favourite result is that my build numbers now have a real meaning for me, instead of simply representing a date and an incremental number like 20150101.2.


Figure 2: Resulting builds with SemVer script

Now each build name immediately tells me the branch used to create the build and the version of the code used. As you can see, the release and master branches are in continuous deployment: at each push the build is triggered and the NuGet package is automatically published. For the develop branch the build is manual, and is done only when I want to publish a package with an unstable version.

I can verify that everything is ok on the MyGet/NuGet side: packages are published with the correct numbers.


Figure 3: SemVer is correctly applied on NuGet packages

Thanks to GitVersion I can automatically version the build number, all the assemblies and the NuGet package version with a few lines of PowerShell.


Thanks to build vNext’s ease of configuration, plus PowerShell scripts and the simple command model based on script output, with a few lines of code you are able to use Semantic Versioning in your builds.

This example shows you some of the many advantages you have with the new build system in VSO/TFS.

Gian Maria.