Use different Excel TFS / VSTS Add-ins at the same time

If you are a consultant, it is quite common to work with various versions of TFS at the same time. I have my personal account on VSTS, always updated to the latest version, but I also have customers that still use TFS 2012 or TFS 2010.

Microsoft tests newer versions of TFS against lots of applications to make sure that an upgrade does not break existing tools. This means that you can usually upgrade your TFS without worrying that your VS 2010 or Visual Basic 6 stops working. Be aware that the opposite is not true: a newer version of Visual Studio may not work well with an older version of TFS. This is a deliberate choice, because Microsoft encourages people to keep their TFS installation up to date, and it would be a nightmare to guarantee that newer tools can always communicate with older service APIs.

To minimize compatibility problems, you should keep your TFS on-premise updated to the latest version.

Tools like Visual Studio are usually not a problem: you can keep as many VS versions as you want side by side, so if you still use TFS 2012 you can keep using VS 2012 without any issue. But you can have problems with other tools.

The Office TFS add-in is installed automatically with Visual Studio Team Explorer or with any version of Visual Studio. This means that whenever you update VS or install a new VS version, the Office add-in is also updated.

Starting with Visual Studio 2015 there is no standalone Team Explorer anymore; if you want to install only the Office add-in you can use the standalone installer, following the links in this post from Brian Harry.

In the Italian Visual Studio forum there is a question from a user who experienced problems exporting Work Item Query results to Excel after upgrading to Visual Studio 2015 Update 3. He is able to connect Excel to VSTS, but the add-in no longer works with his on-premise TFS 2012. This proves that the add-in works correctly with the latest TFS version, but it no longer supports older TFS versions.

The solution to this problem is simple, because you can choose in Excel which version of the add-in you want to use. Just go to Excel Options, choose Add-ins (1), select COM Add-ins in the Manage dropdown (2) and finally press the Go button.

Figure 1: Managing Excel add-ins from the Options pane.

If you scroll down the add-in list, you should see several versions of the TFS add-in, one for each version of Visual Studio you have installed. On my machine I have VS 2012, VS 2013 and VS 2015, so I have three distinct versions of the add-in.

Figure 2: Multiple TFS add-ins installed if you have multiple versions of Team Explorer.
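
If you want to see which TFS add-ins are registered without even opening Excel, you can list the Excel COM add-ins from the registry. This is only a rough sketch in PowerShell: the registry key names of the TFS add-in vary with the Visual Studio version, and 32-bit Office on 64-bit Windows also uses the Wow6432Node hive, so treat the output as informative only.

# Excel COM add-ins are registered per user and per machine: check both hives.
$paths = "HKCU:\Software\Microsoft\Office\Excel\Addins",
         "HKLM:\Software\Microsoft\Office\Excel\Addins"

foreach ($path in $paths) {
    if (Test-Path $path) {
        Get-ChildItem $path | ForEach-Object {
            $addin = Get-ItemProperty $_.PSPath
            # FriendlyName and LoadBehavior are the standard COM add-in registry values.
            [PSCustomObject]@{
                Key          = $_.PSChildName
                FriendlyName = $addin.FriendlyName
                LoadBehavior = $addin.LoadBehavior
            }
        }
    }
}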

You can tell the version of each add-in simply by looking at its location, but the cool part is that you can enable more than one add-in at the very same time. As a result you get multiple Team ribbon tabs in Excel, as shown in Figure 3.

Figure 3: Multiple TFS add-ins enabled at the very same time

I must admit this is not a nice situation to be in, because it is confusing and there is no clear clue about which version of the add-in each tab refers to, but at least you can use both of them at the very same time. If you prefer, you can enable only an older version (say the 2012 version) to make sure it works with your main TFS instance. Usually an older version of the add-in is still capable of working with a newer instance of TFS.

I have not tested this technique thoroughly, but it should work without problems.

Gian Maria.

Release to Azure with Azure ARM templates

Thanks to the new Release Management system in VSTS / TFS, creating a release to your on-premise environment is really simple (I described the process here). Another option is creating a test environment in Windows Azure, and if you choose this option life can be even easier.

In this example I’m using Azure as IaaS, deploying software on a Windows Virtual Machine. While this is probably not the best approach to the cloud (PaaS is surely a better one), for creating a test environment it can be perfectly acceptable.

I’m not going to give you an introduction to Azure Resource Manager, because there are tons of resources on the web and Azure moves so quickly that any information I give you will probably be old by the time I press “publish” :). The purpose of this post is to give you a general idea of how to use Azure ARM in a release definition that automatically creates the resources in Azure and deploys your software on them.

My goal is to use Azure ARM for DevOps and automatic / continuous deployment, and the first step is creating a template file that describes exactly all the Azure resources needed to host my application. Instead of writing such a template file from scratch, I started checking on GitHub, because there are tons of template files ready to use.

As an example I took one of the simplest, called 101-vm-simple-windows. It creates a simple Windows Virtual Machine and nothing else. The template has various parameters that allow you to specify the VM name and other properties, and it can be used directly by a Release Management definition. I made some simple modifications to the template file, and in this situation it is better to first check that everything works as expected by triggering the deploy process directly from the command line.

New-AzureRmResourceGroupDeployment `
	-Name JarvisRm `
	-ResourceGroupName JarvisRm `
	-TemplateFile "azuredeploy.json" `
	-adminUsername alkampfer `
	-adminPassword ********** `
	-vmName JarvisCmTest `
	-storageAccount jarvisrmstorage `
	-dnsLabelPrefix jarvisrm

As you can see, I need to choose the name of the resource group (JarvisRm), specify the template file (azuredeploy.json) and finally pass all the parameters of the template as if they were parameters of the PowerShell cmdlet. Once the script finishes, verify that the resource group was created correctly and that all the resources are suitable for deploying your software.

Figure 1: Your Resource group was correctly created.

Once everything was correct, I deleted the JarvisRm resource group, and I was ready to use the template in a release definition.
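
The verification and the cleanup can also be done from the same PowerShell session; here is a quick sketch, using the same resource group name as the deploy command above.

# Check that the resource group exists and that the deployment succeeded.
Get-AzureRmResourceGroup -Name JarvisRm
Get-AzureRmResourceGroupDeployment -ResourceGroupName JarvisRm |
    Select-Object DeploymentName, ProvisioningState, Timestamp

# Once verified, delete the whole resource group so the release can recreate it from scratch.
Remove-AzureRmResourceGroup -Name JarvisRm -Force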

Always test your ARM template directly from the command line, to verify that everything is all right. When the resources are created, try to use them manually as the target of a deploy, and only once everything is OK start automating with Release Management.

When you have a good template file, the best place to store it is source control; this allows you to version the file along with the version of the code that is supposed to use it. If you do not need versioning you can simply store it on a network share, but to avoid problems it is better to have the Release Management agent run the template from a local disk and not from a network share.

Figure 2: Copy template file from a network share to a local folder of the agent.

The first step of the release process is copying the template files from a network share to the $(System.DefaultWorkingDirectory)\ARM folder, so PowerShell runs against scripts placed on the local disk. The second task is Azure Resource Group Deployment, which uses the template to deploy all resources to Azure.

Figure 3: The Azure Deployment task is used to create a Resource Group from a Template definition.

You only need to specify the template file (1) and all the parameters of the template (2), such as username, password, DNS name of the VM, etc. As a nice option you can choose Enable Deployment Prerequisites (3) to make the VM usable as the target of deploy actions. You can read more about prerequisites on the MSDN blog; basically, when you select this option the task configures PowerShell and other settings on the target machine so that scripts can be executed remotely.

Virtual machines need to be configured to be used as the target of deploy tasks such as remote PowerShell execution, but the Azure deployment task can take care of everything for you.

This task requires that you have already connected the target Azure subscription to your VSTS account. If you have never connected your TFS / VSTS account to your Azure subscription with ARM, you can follow the instructions at this link, which contains a PowerShell script that does EVERYTHING for you. Just run the script, and note down in a safe place all the data you need to insert into your TFS / VSTS instance to connect it to Azure with ARM.
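
For reference, the kind of AzureRM commands involved in creating the service principal for such an ARM endpoint looks roughly like the following sketch. The display name, password and subscription id are placeholders, and the exact parameter set changes between AzureRM versions, so use the linked script rather than this snippet for the real setup.

# Log in and select the subscription you want to connect (placeholder id).
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "11111111-2222-3333-4444-555555555555"

# Create an AAD application and a service principal for it (placeholder names / password).
$app = New-AzureRmADApplication -DisplayName "VstsArmEndpoint" `
    -HomePage "http://VstsArmEndpoint" `
    -IdentifierUris "http://VstsArmEndpoint" `
    -Password "ChangeThisP@ssw0rd"
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Give the service principal rights on the subscription, so the release can create resources.
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $app.ApplicationId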

Another aspect you need to take care of is the version of the Azure PowerShell tools installed on the machine where the release agent is running. Release Management tasks are tested against specific versions of the Azure PowerShell tools, and since the Azure team is constantly upgrading the tools, it could happen that the TFS / VSTS Release Management tasks are not compatible with the latest version of the Azure tools.

All of these tasks are open source, and you can find information directly on GitHub. As an example, at this link you can find the information about the DeployAzureResourceGroup task. If you scroll to the bottom you can verify the PowerShell tools version suggested to run that task.

Figure 4: Supported versions of the AzureRM module

Clearly you should install a compatible version on the machine where the agent is installed. If you are unsure whether the agent has a suitable version of the Azure PowerShell tools, you can go to the TFS admin page and verify the capabilities of the agent directly from VSTS / TFS.

Figure 5: Agent capabilities contain the version of the Azure PowerShell tools installed
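
You can also check directly on the agent machine; a quick sketch, assuming the AzureRM module is what the agent uses:

# List the AzureRM module versions visible to PowerShell on the agent machine.
Get-Module -ListAvailable -Name AzureRM | Select-Object Name, Version

# If the module was installed through the PowerShell Gallery, this shows the installed version too.
Get-InstalledModule -Name AzureRM | Select-Object Name, Version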

A demand for Azure PS is present in the release definition, but it does not specify the version, so there is no guarantee that your release is going to be successful.

Figure 6: The release process has a demand for Azure PowerShell tools, but not for a specific version.

As a result I had problems setting up the release process, because my agents had PowerShell tools version 1.4 installed, which is not fully compatible with the Release Management task. Downgrading the tools solved the problem.

If your release fails with strange errors (such as NullReferenceException), check on GitHub the version of the PowerShell tools needed by that task and install the right version on the agent (or at least try changing the version until you find the most recent one that works).
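
A minimal sketch of such a downgrade with PowerShellGet follows; the version number is only an example, use the one suggested on the task’s GitHub page, and note that if you installed the tools with the MSI installer you have to remove them from Control Panel instead.

# Remove the AzureRM version currently installed from the gallery (run as administrator).
Uninstall-Module -Name AzureRM -AllVersions -Force

# Install the specific version suggested for the release task (example version number).
Install-Module -Name AzureRM -RequiredVersion 1.0.4 -Force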

The Azure Resource Group Deployment task takes care of everything; I modified the base template to apply a specific Network Security Group to the VM, but the general concept is that it configures every Azure resource you need. At the end of the deployment you have everything you need to deploy your software (virtual machines, sites, databases, etc.).

In my example I only need a VM, and once it is configured I can simply use the Copy to Azure VM task and the Execute PowerShell on Azure VM task to release my software, as I did for my on-premise environment.

Figure 7: Configuration of the task used to copy files to the Azure VM

You can specify the files you want to copy (1), the login to the machine (2) and, thanks to the Enable Copy Prerequisites option (3), let the task take care of every step needed to copy files to the VM. This option is not needed if you already enabled prerequisites in the Azure Deployment task, but it can be really useful if you have a pre-existing virtual machine you want to use.

The final step is executing the release script on the target machine, and it has the same options you specify to run a script on an on-premise machine.

Figure 8: Run the installation PowerShell script on the target Azure VM

Once everything is in place you only need to create a release and wait for it to finish.

Figure 9: Output of the release definition with Azure ARM

In this example, since I’m using a virtual machine, the deploy script is the same one I used for the on-premise release; with a PaaS approach you usually have a different script that targets Azure-specific resources (websites, DocumentDB, etc.).

If the release succeeds you can log in to portal.azure.com to verify that your new resource group was created correctly (Figure 1), and check that the resource group contains all the expected resources (Figure 10).

Figure 10: Resources created inside the group.

To verify that everything is OK you should check the exact version of the software that is actually deployed in the environment. From Figure 11 I can see that the release deployed version 1.5.2.

Figure 11: List of the most recent releases.

Now I can log in to the VM and try the software to verify that it is correctly installed and that the installed version is the expected one.

Figure 12: Software is correctly installed and the version corresponds to the version of the release.

Azure Resource Manager is a powerful feature that can dramatically simplify releasing your software to Azure, because you can just download templates from GitHub to automatically create all the Azure resources needed by your application, and let the VSTS Release Management tasks take care of everything else.

Gian Maria.

Upgrading SonarQube from 5.1 to 6.0

SonarQube is a really nice piece of software, but in my experience it does not play well with SQL Server. Even if SQL Server is fully supported, there are always some little problems in setting everything up, and this is probably due to the fact that most people using SonarQube use MySQL as the database engine.

Today I was upgrading a test instance from version 5.1 to 6.0: I installed the new version, launched the database upgrade procedure, and after some minutes the upgrade stopped with a bad error.

Figure 1: Database error during upgrade.

Remember that the SonarQube upgrade procedure has no rollback, so it is mandatory to take a full backup of the system before performing the upgrade.
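
As a minimal sketch, assuming the database is called Sonar (as in the script later in this post) and that the SqlServer / SQLPS module is available, a database backup from PowerShell could look like this; instance name and path are placeholders.

# Back up the Sonar database before starting the upgrade (adjust instance and path).
Invoke-Sqlcmd -ServerInstance "localhost" -QueryTimeout 600 -Query @"
BACKUP DATABASE [Sonar]
TO DISK = N'D:\Backup\Sonar_before_upgrade.bak'
WITH INIT;
"@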

From what I learned in previous situations, whenever something does not work in SonarQube you need to look at the log. The above error message is really misleading, because it suggests some kind of connection problem. Reading the log I discovered that the error was quite different.

2016.08.11 19:24:00 ERROR web[o.s.s.d.m.DatabaseMigrator] Fail to execute database migration: org.sonar.db.version.v60.CleanUsurperRootComponents
com.microsoft.sqlserver.jdbc.SQLServerException: Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CS_AS" and "Latin1_General_CS_AS" in the equal to operation.

Whenever you have a problem with SonarQube, do not forget to read the log, because only there can you understand the real cause of the error.

Collation problems are quite common with SonarQube; the documentation is somewhat imprecise, because it tells you to use an accent- and case-sensitive collation but does not specify which one. My SonarQube 5.1 instance worked perfectly with SQL_Latin1_General_CP1_CS_AS, but sadly the script that upgrades the database to version 6.0 fails because it expects Latin1_General_CS_AS.

If you install SonarQube with SQL Server, it is better to choose Latin1_General_CS_AS as the collation to avoid problems.
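
To check which collation your SonarQube database currently uses, here is a quick sketch; the database name matches the script below, the instance name is a placeholder.

# Returns the current collation of the Sonar database.
Invoke-Sqlcmd -ServerInstance "localhost" `
    -Query "SELECT DATABASEPROPERTYEX('Sonar', 'Collation') AS CurrentCollation"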

Luckily you can change the collation of an existing database; the only caveat is that you need to put the database in single user mode to perform this operation.

USE master;  
GO  
  
ALTER DATABASE Sonar SET SINGLE_USER WITH ROLLBACK IMMEDIATE; 
GO 

ALTER DATABASE Sonar  COLLATE Latin1_General_CS_AS ;  
GO 

ALTER DATABASE Sonar SET MULTI_USER; 
GO  

Clearly you should have a backup of your database before the migration, or you will end up with a corrupted database and no way back. So I restored the database from the backup, ran the above script, restarted SonarQube and performed the upgrade again.

Figure 2: Database is upgraded correctly

My instance is now running version 6.0.

Gian Maria.

Manage Environment Variables during a TFS / VSTS Build

To avoid creating unnecessary build definitions, it is a best practice to allow parameter overriding in every task that can be executed from a build. I have already dealt with how to parametrize tests to use a different connection string when tests are executed during the build, and I used environment variables for several reasons.

Environment variables are not source controlled; this allows every developer to override settings on his own machine without disturbing other developers. If I do not have MongoDB on my machine, I can simply choose to use some other instance in my network.

Figure 1: Overriding settings with environment variables.

No one in the team is affected by this setting, and everyone is free to change the value to whatever he/she likes. This is important because you can have different versions of MongoDB installed in your network, with different configurations (MMAPv1 or WiredTiger), and you want the freedom to choose the instance to use.

Another interesting aspect of environment variables is that they can be set during a VSTS / TFS build directly from the build definition. This is possible because variables defined for a build are set as environment variables when the build runs.

Figure 2: Specifying environment variables directly from the Variables tab of a build

If you allow the variable to be settable at queue time, you can change its value when you manually queue a build.

Figure 3: Specifying variables value at Queue Time

If you look at Figure 3, you can verify that I am able to change the value at queue time, but also that I can simply press “Add variable” to add any variable, even one not included in the build definition. In this specific situation I can trigger a build and have my tests run against a specific MongoDB instance.

Remember that the value specified in the build definition overrides any value set as an environment variable on the build machine. This implies that, once you set a value in the build definition, you are not able to change that value for a specific build agent.

If you want to be able to choose a different value for each build agent machine, you can simply avoid setting the value on the Variables tab and instead define the variable on each build machine, so every agent has its own value. An alternative approach is to use two environment variables, e.g. TEST_MONGODB and TEST_MONGODB_OVERRIDE, and configure your tests to use TEST_MONGODB_OVERRIDE if present and to fall back to TEST_MONGODB otherwise. This allows you to use TEST_MONGODB in the build definition, but if you set TEST_MONGODB_OVERRIDE on a specific test agent, that agent will use its own value.
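
The resolution order is trivial; here is a minimal sketch of it in PowerShell, where the local default connection string is just an example.

# Resolve the MongoDB connection string: the per-agent override wins,
# then the build / machine wide variable, then a local default.
$mongoUrl = $env:TEST_MONGODB_OVERRIDE
if ([string]::IsNullOrEmpty($mongoUrl)) { $mongoUrl = $env:TEST_MONGODB }
if ([string]::IsNullOrEmpty($mongoUrl)) { $mongoUrl = "mongodb://localhost:27017" }

Write-Host "Tests will run against $mongoUrl"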

Another interesting aspect of environment variables is that they are included in the agent capabilities, as you can see in Figure 4.

Figure 4: All environment variables are part of agent Capabilities

This is an important aspect: if you want that variable to be set on the agent, you can avoid including it in the Variables tab and instead require the build to run on an agent that has the TEST_MONGODB environment variable defined.

Figure 5: Add a demand for a specific Environment Variable to be defined in Agent Machine

Setting the demand is not always necessary; in my example, if the TEST_MONGODB variable is not defined, tests are executed against the local MongoDB instance. It is always a good strategy to use a reasonable default if a setting is not present in the environment variables.

Gian Maria.

Scale out deployment error when migrating Reporting Services

Part of moving your TFS server to new hardware, or of creating a pre-production environment, is restoring the Reporting Services database. Since the database contains encrypted data, if you simply restore it and then configure Reporting Services on the new machine to use the restored database, the operation will fail, because the new server cannot read the encrypted data.

Restoring the Reporting Services database into a new instance involves some manual steps; in particular, you need to back up the encryption key on the old server and restore it on the new one.

Clearly you should already have a backup of your encryption key; this is part of a good backup process, and it is automatically performed by the standard TFS backup wizard. If you never backed up your encryption key I strongly suggest you DO IT NOW. The backup can be done manually from the SQL Server Reporting Services Configuration Manager.

Figure 1: Backup your encryption key

You should choose a strong password, and then you can save the key to a file that can be moved and imported on the machine where you restored your Reporting Services database. On that server you simply use the Restore feature to restore the encryption key, so the new installation is able to read the encrypted data in the database.
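
If you prefer the command line, the rskeymgmt.exe utility installed with Reporting Services can do the same backup and restore; a sketch follows, where file path and password are placeholders.

# On the old server: extract (back up) the encryption key to a password-protected file.
rskeymgmt.exe -e -f "D:\Backup\rskey.snk" -p "StrongP@ssword"

# On the new server: apply (restore) the key from that file.
rskeymgmt.exe -a -f "D:\Backup\rskey.snk" -p "StrongP@ssword"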

If the name of the machine changed, for example because you performed the restore in a test environment, when you try to access your Reporting Services instance you will probably get this error:

The feature: “Scale-out deployment” is not supported in this edition of Reporting Services. (rsOperationNotSupported)

This happens because the key you restored belongs to the old server, and now your Reporting Services instance believes it is part of a multi-server deployment, which is not supported in the Standard edition of Reporting Services.

Figure 2: Scale out Deployment settings after key is imported.

In Figure 2 you can verify that, after importing the encryption key, I have two servers listed: RMTEST is the name of the machine where I restored the DB, while TFS2013PREVIEWO is the name of the machine where Reporting Services was originally installed. In this specific scenario I’m cloning my TFS environment into a test sandbox.

Luckily there is this post that explains the problem and gives you a solution. I must admit that I do not feel really comfortable manually manipulating the Reporting Services database, but the solution has always worked for me. As an example, in Figure 3 you can see that I have two entries in the Keys table of the reporting database.

Figure 3: The Keys table contains entries for both keys.

After removing the TFS2013PREVIEWO key from the database, the scale-out settings go back to normal and Reporting Services starts working again.
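
The removal boils down to deleting the row of the old server from the Keys table; a sketch, assuming the database is called ReportServer and that the table exposes the machine name as in Figure 3. Take a database backup before touching this table.

# Hypothetical database name: check the real name of your Reporting Services database first.
Invoke-Sqlcmd -ServerInstance "localhost" -Database "ReportServer" `
    -Query "DELETE FROM dbo.Keys WHERE MachineName = 'TFS2013PREVIEWO'"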

Figure 4: Reporting services are now operational again.

Gian Maria.