Isolate your TFS Pre-Production environment for maximum security

In a previous post I explained how to create a clone of your TFS Production environment thanks to the new TFS “15” wizard. In this post I want to share a simple solution from my bag of tricks to prevent your cloned TFS environment from interfering with the production environment.

The problem

In my environment all machines are in the 10.0.0.0/24 network: my TFS has address 10.0.0.116 and the Primary Domain Controller is 10.0.0.42. I also have automated Build and Release Management definitions that deploy against various machines: 10.0.0.180, 10.0.0.181, 10.0.0.182, etc.

Even if I used the wizard, or command-line instructions, to change the TFS server id, there is always the risk that, if a build starts from the cloned environment, something will be wrongly deployed to machines used by the production environment (10.0.0.180, etc.).

The usual trick of changing the hosts file on Pre-Production TFS machines works if you always use machine names in your build definitions, but if a build deploys directly to 10.0.0.180 there is nothing you can do. This exposes me to the risk of production environment corruption and limits my freedom to use the cloned TFS environment.

What I want is complete freedom to work with the cloned TFS environment without ANY risk of accessing production machines from any machine of the cloned environment (build controllers, test agents, etc.).

Virtualization to the rescue

Instead of placing the pre-production environment in my 10.0.0.0/24 network, I use Hyper-V virtual networking capabilities to create an internal network.

Figure 1: Virtual networks configured in Hyper-V hosts

Figure 1 depicts what you see clicking the Virtual Switch Manager setting (1): I have a virtual switch called “Internal Network” (2) that is configured as an internal network (3). This means that the network can be used by all VMs to communicate among themselves and with the host, but there is no way to communicate with the real production network. The physical network card of the Hyper-V host is bound to a standard “External Network”; it is called “ReteCablata” (4) and it is the network that can access machines in the production network.

With this configuration I decided to install all machines that will be used for TFS Pre-Production (server, build, etc.) using only the “Internal Network”. The machine I'll use as Pre-Production TFS has address 10.1.0.2, while the Hyper-V host has address 10.1.0.254. This allows my Hyper-V host to communicate with the virtual machines through the Internal Network virtual network interface.

Now if I try to log in to the machine with domain credentials, I get an error.

Figure 2: I'm unable to log in with a domain user, because the domain controller is unreachable.

Logging in with a local user works; the reason I cannot log in as a domain user is that the machine cannot reach the domain controller, since it lives in another virtual network.

Thanks to this solution I've created an isolated subnetwork where I can build my TFS Pre-Production / Test environment without the risk of corrupting the production environment.

Thanks to virtual networking it is easy to create a virtual network completely isolated from your production environment where you can safely test a cloned environment.

Iptables to route only what you want

At this point I have an isolated environment, but since it cannot access my domain controller I have two problems:

1) The PreProduction / Test TFS cannot access the domain, and no domain user can access TFS.
2) The PreProduction / Test TFS can only be accessed from the Hyper-V host.

Clearly this makes the approach almost impracticable, but the solution to this limitation is really simple: just install a Linux machine on the Hyper-V host to act as a router; in my example I use a standard Ubuntu Cloud Server without UI. The important aspect is that you need to assign both virtual networks to the machine, so it can connect both to your isolated environment (“Internal Network”) and to the production environment (“ReteCablata”).

Figure 3: Create a Linux VM and be sure to assign both network interfaces.

In my box the physical network (ReteCablata) is eth0 while the internal network is eth1; both interfaces have static IPs, and this is the configuration.

gianmaria@linuxtest1:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
     address 10.0.0.182
     netmask 255.255.255.0
     gateway 10.0.0.254
     dns-nameservers 10.0.0.42

auto eth1
# iface eth1 inet dhcp
iface eth1 inet static
     address 10.1.0.1
     netmask 255.255.255.0
     network 10.1.0.0

The configuration is simple: this machine has IP 10.0.0.182 in my production network (eth0), and IP 10.1.0.1 in the internal virtual network (eth1). Then I configured all Windows machines in the internal virtual network to use this machine as their gateway.

Figure 4: Configuration for Pre-Production TFS Machine

The important aspect is that it uses the eth1 IP of the Linux machine as gateway (10.1.0.1), and it uses 10.0.0.42 as DNS (the address of my primary domain controller).
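
If you prefer scripting over the GUI, a minimal PowerShell sketch like the following produces the same configuration on Windows 8 / Server 2012 or later (the interface alias “Ethernet” is an assumption; adjust it to the actual name of the VM network adapter):

# Static IP in the internal virtual network, with the Linux box as gateway
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.1.0.2 -PrefixLength 24 -DefaultGateway 10.1.0.1
# DNS points to the production domain controller
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.0.0.42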

Now I can configure the Linux box to become a router between the two networks. The first step is enabling forwarding with the instruction

echo 1 > /proc/sys/net/ipv4/ip_forward

This works only until you reboot the Linux machine; if you want the configuration to survive a reboot, edit /etc/sysctl.conf and change the line that says net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1. Once forwarding is enabled, you can configure iptables to route. Here is the configuration:

Disclaimer: I'm absolutely not a Linux expert; this is a simple configuration I put together after studying a little how iptables works, and thanks to articles around the web.

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o eth1 -m state  --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -I FORWARD -d 10.0.0.0/24 -j DROP
sudo iptables -I FORWARD -d 10.0.0.42 -j ACCEPT

The first three instructions are standard rules that configure iptables to act as a router for ALL traffic between eth0 and eth1 and vice versa. With only the first three rules, every machine in the 10.1.0.0/24 network that uses 10.1.0.1 (the Linux box) as gateway can access the internet as well as ALL machines in the 10.0.0.0/24 network (production). This is not enough for me, because with this configuration the machines in the cloned TFS environment have FULL access to production machines.

The fourth rule tells iptables to DROP all traffic directed to subnet 10.0.0.0/24. This rule effectively isolates the two networks in both directions: traffic from the cloned network toward production is dropped, and connections attempted from production die as well, because the replies directed back to 10.0.0.0/24 are dropped too. The fifth and last rule tells iptables to ACCEPT all traffic from and to the address 10.0.0.42, my domain controller; since both rules are inserted with -I, the ACCEPT ends up in front of the DROP and is evaluated first.

Thanks to iptables and a Linux box, it is really easy to create a router that selectively filters traffic between the two networks. This gives you the freedom to decide which machines of the production environment can be reached by the cloned environment.

With this configuration I have created an isolated network that can contact ONLY my domain controller 10.0.0.42 and is otherwise COMPLETELY isolated from my production network. This allows the PreProduction / Test TFS machine to join the domain and validate users, but you can safely launch builds or whatever you want on any machine of the cloned environment, because all traffic to production machines, except the domain controller, is dropped.

How can I access the PreProduction environment from a client machine?

The previous configuration solves only one of my two problems: the PreProduction TFS can now access selected machines of the domain (the domain controller is usually enough), but how can you let developers or managers access the PreProduction environment to test the cloned instance? Suppose a developer is using the 10.0.0.1 machine in the production network and wants to access the PreProduction TFS at address 10.1.0.2; how can you grant access without forcing him to connect to the Hyper-V host and use the Hyper-V console?

First of all you need to tell iptables to allow traffic between that specific IP and the isolated virtual network on eth1.

sudo iptables -I FORWARD -d 10.0.0.1 -j ACCEPT

This rule allows traffic for the client IP, so packets can flow from 10.0.0.1 to every machine in the 10.1.0.0/24 network. It is necessary because we told iptables to DROP all traffic to 10.0.0.0/24 except 10.0.0.42, so you need this rule to allow replies back to the developer's machine. All other machines in the production network are still isolated.

Now the developer at 10.0.0.1 still can't reach the 10.1.0.2 machine, because it is in another subnet. To allow this he simply needs to add a route rule on his machine. Assuming the 10.0.0.1 machine is a standard Windows machine, here is the command line the developer needs to access the cloned environment machines.

route ADD 10.1.0.0 MASK 255.255.255.0 10.0.0.182

With this rule the developer is telling the system that all traffic to the 10.1.0.0/24 subnet should be routed to 10.0.0.182, the address of the Ubuntu Linux machine in the production environment. Now when the developer RDPs to the 10.1.0.2 machine (the cloned TFS server), all traffic is routed by the Linux machine.
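
Note that a route added this way does not survive a reboot; if the developer wants a persistent route, the same command supports the -p flag: route ADD 10.1.0.0 MASK 255.255.255.0 10.0.0.182 -p.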

Final Consideration

Thanks to this configuration, all machines in the 10.1.0.0/24 network can contact and be contacted only by selected production machines, avoiding unwanted corruption of your production environment.

This gives you complete control over the IP addresses that can access your cloned environment, reducing the risk of production environment corruption to almost zero. You can allow access to selected machines, and you can also control which client machines in your production network can access the cloned environment.

Remember that after a reboot all iptables rules are cleared and you need to set them up again. You can configure the Linux box to reload all rules upon reboot, but for this kind of environment I prefer to keep the ability to reboot the Linux machine to completely reset iptables. Re-applying the rules is a matter of a couple of seconds.

Gian Maria.

Create a Pre-Production / Test environment for your TFS

There are a lot of legitimate reasons to create a clone of your TFS installation: verifying an upgrade, testing some customization and so on, but traditionally creating a test environment is not an easy task.

The problem is preventing the test installation from interfering with and corrupting your production instance, and since TFS is a complex product, there is a series of steps you need to follow to perform this kind of operation. Thankfully, with the upcoming version of TFS most of the work is accomplished by a wizard.

Kudos to the TFS team for including a wizard experience to create a clone of your TFS environment.

Here are the detailed steps to create a clone environment.

Step 1: Backup databases / Install TFS on new server / Restore databases

First of all, log in to your TFS server, open c:\Program Files\Microsoft Team Foundation Server 14.0\Tools and launch TfsBackup.exe to take a backup of all databases.

Figure 1: Take a backup of your Production Database

You only need to specify the name of the SQL Server instance that hosts your production databases. A wizard will start, asking you to select the databases to back up and the location where you want to place the backup.

Figure 2: Choose databases to backup

The backup routine will perform a full backup.

Figure 3: Backup is taken automatically from the routine

The next step is creating a new virtual machine, installing a version of SQL Server compatible with the TFS “15” preview (I suggest SQL Server 2016), then installing TFS.

Figure 4: Install TFS on the target machine

Once the installer finishes, the TFS “15” Configuration wizard will appear.

Figure 5: Once installer is complete the Configuration Wizard will ask you to configure the server

Now you should go to c:\Program Files\Microsoft Team Foundation Server 15.0\Tools and launch TfsRestore.exe.

Figure 6: TfsRestore will perform database restore

You only need to choose the name of the SQL Server instance you want to use; in this example I'm creating a Pre-Production environment composed of a single machine called RMTEST. You should transfer the backup files to the target computer, or place them in a network share accessible from the target machine.

Figure 7: Restore routine will prompt you for Backup Location

Once you specify the directory with the backup, the wizard will automatically list all the databases to restore for you.

Figure 8: Databases are restored in SQL Server

Step 2: Extra security precautions to avoid Production corruption

Now all databases are restored in the SQL Server instance that will be used by the Pre-Production environment and you could start the TFS configuration wizard, but before doing that I'll take some extra security precautions.

You should edit the hosts file of the Pre-Production machine to redirect every machine name used in the production environment to a nonexistent IP. As an example, I have Build and Release definitions that deploy software to demo machines, and I want to prevent a build triggered on the Pre-Production TFS instance from accessing production servers.

As an extra security tip, I suggest using the hosts file trick to minimize the risk of production environment corruption.

Figure 9: Editing the hosts file provides an extra safety net against production environment corruption

As an example, Figure 9 shows a typical hosts file: the production instance is called TFS2013PreviewOneBox, so on the cloned machine I redirect that name to localhost. Then I redirect all machines used as deploy targets, build servers, etc. to 10.200.200.200, a nonexistent IP.
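
If you want to script this step, here is a minimal PowerShell sketch (run from an elevated prompt; every machine name except TFS2013PreviewOneBox is a placeholder to replace with your real production names):

$hostsFile = "$env:SystemRoot\System32\drivers\etc\hosts"
# Production TFS name resolves to the clone itself
Add-Content $hostsFile "127.0.0.1`tTFS2013PreviewOneBox"
# Deploy targets and build servers resolve to a nonexistent IP
Add-Content $hostsFile "10.200.200.200`tDEPLOYTARGET01"
Add-Content $hostsFile "10.200.200.200`tBUILDSERVER01"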

You could also create network rules to isolate the Pre-Production machine from the production environment completely, such as placing it in another network segment and preventing routing entirely, but the hosts file is a simpler approach that works well for small and medium installations.

Step 3: Perform the configuration with the new TFS “15” wizard

Before TFS “15” you had to resort to command-line trickery to change the server id in the database and so on, but now you can do everything from the configuration wizard. Let's go back to the Configuration Wizard and choose the option “I have existing database to use …”

Figure 10: Start upgrade wizard using existing databases

The wizard will prompt you to choose the SQL Server instance and the database to use.

Figure 11: Choose database to use for the upgrade

Up to this point it is the standard Upgrade Wizard, but the next screen is the great news of this new installer, because it presents the option to create a Pre-Production environment.

Figure 12: This is the new option, you can choose to create a Pre-Production Upgrade Testing

Pressing Next, you will see another screen that summarizes the steps the wizard will perform to create the clone environment. As you can see, the wizard takes care of remapping connection strings, changing all identifiers and removing all scheduled backup jobs.

Figure 13: Overview of the Pre-Production scenario

Thanks to the wizard you can create a test clone of your production TFS without worrying about corrupting your production environment. The wizard takes care of everything.

The wizard then continues, but there is another good surprise: each screen contains suggestions to minimize the risk of production environment corruption.

Figure 14: The wizard suggests using a different user to run TFS services

The suggestion in Figure 14 is the most important one. I usually use an account called TfsService to run my TFS server, and that account has several privileges in my network. In the Pre-Production environment it is better to use the standard Network Service account or a different account. This is a really important security setting, because if the Pre-Production server tries to perform some operation on other servers, it will probably be blocked because the account lacks the required permissions.

Never use for the Pre-Production environment the same users you use for the production environment, to minimize the risk of corruption. Use Network Service, or users with no privileges in the network, created specifically for cloned environments.

Naturally, the wizard also suggests using a different URL than the production server. Resist the temptation to use the same URL and rely on hosts file redirection: it is really better to use a new name. This allows you to communicate the new name to the team and ask them to access the Pre-Production server to verify that everything is working, for example after a test upgrade.

Figure 15: Use a different URL than the production environment

You can now follow the wizard; the screens are basically the same as the upgrade, but each one suggests using different accounts and different resources than the production instance.

At the end of the wizard you will have a perfect clone of your production environment to play with.

Figure 16: Configuration is finished, you now have a clone of your environment.

Step 4: Extra steps for further security

If you want to be extra sure that your production environment is safe from corruption, configure the firewall of your production systems to block any access from the IPs of the machines that are part of the cloned environment. This extra security measure protects against human error.
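
As a sketch, on production servers running Windows Server 2012 or later you could add a blocking rule with PowerShell like the following (the address range is a placeholder for the actual IPs of your cloned machines):

# Block any inbound traffic coming from the cloned environment
New-NetFirewallRule -DisplayName "Block TFS Pre-Production clone" -Direction Inbound -RemoteAddress "10.200.200.1-10.200.200.50" -Action Block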

Some customers have custom software that connects to the TFS instance to perform custom logic. As an example, you could have software that uses bissubscribe.exe or hooks to listen to TFS events and then send commands to TFS. Suppose you want to test this kind of software against your cloned environment, so you let people install and configure everything on the Pre-Production machine, but someone makes a bad mistake and configures the software to listen to the Pre-Production environment while sending commands to the production environment. If you blocked all traffic from Pre-Production machines to your TFS production environment, you are protected against this kind of mistake.

If you are good at networking, probably the best solution is creating all machines that are part of the Pre-Production environment (TFS, SQL, build server, etc.) in another network segment, then configuring routing / firewall rules so that machines in the pre-prod network can access only domain controllers, or in general only the machines that are strictly needed. This prevents machines of the Pre-Production environment from connecting to any machine of your production environment. You can then allow selected IPs from your regular network to access Pre-Production for testing.

Gian Maria

Impressions on installing TFS “15” Preview

Microsoft released a preview of the new version of Team Foundation Server, codename TFS “15”, and as usual I immediately downloaded and installed it on some of my test servers. I'm not going to show you the full steps of installation or upgrade, because installing TFS is now a Next/Next/Next experience, but I want to highlight a couple of really interesting aspects of the new installer.

Support for Pre-Production environments

When it is time to upgrade your TFS production server, it is always good practice to perform the upgrade in a pre-production environment, to avoid surprises when upgrading the production instance. Traditionally this is not such an immediate task, because you need to back up and restore the databases to a test server, then run some command-line instructions to change the id of the server in the db, and only then perform the upgrade.

I've written a post in the past on how to create a safe clone of your TFS environment, and I suggest you have a read of that post, but with the new version of the TFS installer, when you choose the upgrade path, you get this new screen

Figure 1: You can choose to create a pre-production test directly from the wizard

This is a really big improvement, because now you can simply back up and restore your databases on a new server, run the wizard, choose to do a Pre-Production Upgrade Testing, and you have a clone of your server upgraded to the new version where you can run all the tests needed to verify the upgrade process.

Thanks to the Pre-Production Upgrade Testing option, creating a test clone of your server on which to perform the upgrade is a breeze.

This option is not only useful for upgrades; it also allows you to quickly create a clone of your TFS to run whatever experiment you want with real production data, without harming your production server.

Great work Microsoft!!

Code search and ElasticSearch

Another interesting wizard screen is the one dedicated to installing Code Search capabilities.

Figure 2: Code Search installation screen

Code Search was introduced in VSTS quite some time ago, but now it is available for on-premises TFS installations too. It uses ElasticSearch under the hood, but the installer takes care of everything for you. From Figure 2 you can see that you only need to specify a folder where the index is stored, choose whether to enable Code Search for every project collection, and you are done.

Sadly, you are not allowed to use an existing installation of ElasticSearch; here is the error you get if you already have ES installed on the machine.

Figure 3: You cannot re-use an existing ES installation

You should uninstall every installation of ES you have on your machine before installing Code Search capabilities. Since ES depends on Java, if you do not have a Java JRE or JDK installed on the machine, the wizard gives you an error during the verification process.

Figure 4: An error is present if your machine has no Java installed

You can check the box to accept the Oracle Binary Code License Agreement and the installer will download and install Java for you, or you can simply download and install Java manually, then re-run the Readiness Checks. If you are going to do a manual install of Java, or if you have a pre-existing Java installation, please check on the ES site whether your version is compatible or has known problems.

Usually you have no Java on your TFS box, so it is safe to check the checkbox and have the installer do everything for you automatically.

SSH Support

To support the SSH protocol for Git, you should tell the installer that you want to enable the SSH service and choose the port (22 by default).

Figure 5: SSH Support in TFS for Git

As TFS evolves, the Configure / Upgrade Wizard becomes more and more complete, allowing you to install TFS without leaving the wizard and configuring / installing all dependent services for you.

Gian Maria.

Create parametrized tests to allow for simpler builds

When it is time to run unit tests in a TFS or TeamCity build, you often face the problem of running tests with options different from those used on the developer machine. As an example, we have tons of tests that require MongoDb and ElasticSearch or Solr integration.

While it is quite normal for developers to have everything installed on the local dev box, it can be annoying to keep MongoDb and ElasticSearch installed on all agent machines. This approach complicates the setup of build servers and creates a situation that is less manageable.

While it is possible to create a dedicated pool composed only of agents that have MongoDb and ElasticSearch installed, I prefer being able to run my tests on all test agents, without any restriction.

The best solution is having parametrized tests, so you can execute tests with different parameters during the build; as an example, you should parametrize connection strings.

Having parametrized tests greatly improves the ability to run tests during a build without creating complex requirements for agents.

In a .NET environment it is quite common to use the app.config file to contain all connection strings; here is an example modeled on one of our projects (the connection string values shown are representative localhost defaults):

<connectionStrings>
    <add name="eventstore" connectionString="mongodb://localhost/jarvis-framework-es-test" />
    <add name="saga" connectionString="mongodb://localhost/jarvis-framework-saga-test" />
    <add name="readmodel" connectionString="mongodb://localhost/jarvis-framework-readmodel-test" />
    <add name="system" connectionString="mongodb://localhost/jarvis-framework-system-test" />
    <add name="engine" connectionString="mongodb://localhost/jarvis-framework-engine-test" />
    <add name="rebus" connectionString="mongodb://localhost/jarvis-rebus-test" />
</connectionStrings>

All tests use the ConfigurationManager object to read connection strings from the configuration file, so there is a single point where all the connection strings used by tests are specified.

The obvious problem of storing settings in app.config is that this file is source controlled, so it is not possible to have different settings for different machines / developers.

A possible solution to this problem is using a PowerShell script that modifies all the configuration files in the bin directories before running the tests. Here is a naive approach in PowerShell.

param(
    [string] $baseMongoConnection = "mongodb://admin:xxxxxx##localhost/{0}",
    [string] $connectionQueryString = "?authSource=admin",
    [string] $configuration = "debug"
)

# Helper that replaces the value of every node matched by an XPath expression.
# In PowerShell a function must be defined before it is called, so it goes first.
function Edit-XmlNodes {
param (
    $doc = $(throw "doc is a required parameter"),
    [string] $xpath = $(throw "xpath is a required parameter"),
    [string] $value = $(throw "value is a required parameter"),
    [bool] $condition = $true
)
    if ($condition -eq $true) {
        $nodes = $doc.SelectNodes($xpath)

        foreach ($node in $nodes) {
            if ($node -ne $null) {
                if ($node.NodeType -eq "Element") {
                    $node.InnerXml = $value
                }
                else {
                    $node.Value = $value
                }
            }
        }
    }
}

## Logging tests
$configFileName = "..\Logging\Jarvis.Framework.LoggingTests\bin\$configuration\Jarvis.Framework.LoggingTests.dll.config"
Write-Output "Config File Name Is: $configFileName"

# The [xml] cast is needed so that SelectNodes and Save are available
$xml = [xml](Get-Content $configFileName)

Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='testDb']/@connectionString" -value "$baseMongoConnection$connectionQueryString"

$xml.Save($configFileName)

## Main tests
$configFileName = "..\Jarvis.Framework.Tests\bin\$configuration\Jarvis.Framework.Tests.dll.config"
Write-Output "Config File Name Is: $configFileName"

$xml = [xml](Get-Content $configFileName)

Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='eventstore']/@connectionString" -value (($baseMongoConnection -f "jarvis-framework-es-test") + $connectionQueryString)
Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='saga']/@connectionString" -value (($baseMongoConnection -f "jarvis-framework-saga-test") + $connectionQueryString)
Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='readmodel']/@connectionString" -value (($baseMongoConnection -f "jarvis-framework-readmodel-test") + $connectionQueryString)
Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='system']/@connectionString" -value (($baseMongoConnection -f "jarvis-framework-system-test") + $connectionQueryString)
Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='engine']/@connectionString" -value (($baseMongoConnection -f "jarvis-framework-engine-test") + $connectionQueryString)
Edit-XmlNodes $xml -xpath "/configuration/connectionStrings/add[@name='rebus']/@connectionString" -value (($baseMongoConnection -f "jarvis-rebus-test") + $connectionQueryString)

$xml.Save($configFileName)

Do not shoot the pianist :) This is a quick and dirty script that edits the configuration files, but it is not a good approach:

1) If a developer wants to change these settings on his machine, it is really complex to instruct Visual Studio to run this script with parameters before running the tests.
2) It leads to unnecessarily complicated builds, because you need to run this script before running the tests, and it introduces another point of failure.
3) It is not possible to have agent-dependent settings: I cannot specify that agent X should run the tests against Mongo instance Y.

A better solution is to use environment variables to override the app.config connection strings, creating an NUnit SetUpFixture that is executed before the first test.

NUnit has the ability to run a global setup before the very first test runs, and it is the perfect place to put logic that changes the configuration of the tests. In the following example the setup code checks some environment variables, then changes the connection strings.


using System;
using System.Configuration;
using NUnit.Framework;

[SetUpFixture]
public class GlobalSetup
{
    // Runs once before the very first test of the assembly.
    // (With NUnit 3 this method should be marked [OneTimeSetUp] instead of [SetUp].)
    [SetUp]
    public void OverrideConnectionStringsFromEnvironment()
    {
        // If no override is requested, keep the defaults stored in app.config.
        var overrideTestDb = Environment.GetEnvironmentVariable("TEST_MONGODB");
        if (String.IsNullOrEmpty(overrideTestDb)) return;

        var overrideTestDbQueryString = Environment.GetEnvironmentVariable("TEST_MONGODB_QUERYSTRING");

        // Rewrite every connection string, then save and refresh the section so that
        // subsequent reads through ConfigurationManager see the overridden values.
        var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        var connectionStringsSection = (ConnectionStringsSection)config.GetSection("connectionStrings");
        connectionStringsSection.ConnectionStrings["eventstore"].ConnectionString = overrideTestDb.TrimEnd('/') + "/jarvis-framework-es-test" + overrideTestDbQueryString;
        connectionStringsSection.ConnectionStrings["saga"].ConnectionString = overrideTestDb.TrimEnd('/') + "/jarvis-framework-saga-test" + overrideTestDbQueryString;
        connectionStringsSection.ConnectionStrings["readmodel"].ConnectionString = overrideTestDb.TrimEnd('/') + "/jarvis-framework-readmodel-test" + overrideTestDbQueryString;
        connectionStringsSection.ConnectionStrings["system"].ConnectionString = overrideTestDb.TrimEnd('/') + "/jarvis-framework-system-test" + overrideTestDbQueryString;
        connectionStringsSection.ConnectionStrings["engine"].ConnectionString = overrideTestDb.TrimEnd('/') + "/jarvis-framework-engine-test" + overrideTestDbQueryString;
        connectionStringsSection.ConnectionStrings["rebus"].ConnectionString = overrideTestDb.TrimEnd('/') + "/jarvis-rebus-test" + overrideTestDbQueryString;
        config.Save();
        ConfigurationManager.RefreshSection("connectionStrings");
    }
}

This approach has numerous advantages:

1) It works even on developer workstations: if you want to use a different connection string, just populate the corresponding environment variable and you are ready to go.
2) You can define variables that are valid for all agents in the build definition, or set up different environment variables for each build agent, so each build agent runs tests against a different Mongo instance/database.

This means that one of the best approaches is parametrizing the tests with defaults that are good for developer machines, then allowing the configuration to be overridden with environment variables for easy configuration of build agents.
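
As a minimal sketch, this is how you could configure a build agent machine with PowerShell (the Mongo host name is a placeholder; the variable names are the ones read by the SetUpFixture above):

# Machine-level variables are visible to the build agent service after it is restarted
[Environment]::SetEnvironmentVariable("TEST_MONGODB", "mongodb://buildmongo1", "Machine")
[Environment]::SetEnvironmentVariable("TEST_MONGODB_QUERYSTRING", "?authSource=admin", "Machine")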

Gian Maria

Running Unit Tests on a different machine during a TFS 2015 build

First of all I need to thank my friend Jakob Ehn, who pointed me in the right direction to create this particular build. In this post I'll share with you my journey to run tests on a different machine than the one that is running the build.

For some builds it is interesting to have the ability to run some unit tests (NUnit in my scenario) on a machine different from the one that is running the build. There are a lot of legitimate reasons for doing this: for a project I'm working on, running one set of tests requires a huge number of prerequisites installed (LibreOffice, Ghostscript, etc.). Instead of installing those prerequisites on all agent machines, or installing them on a single build agent and using capabilities, I'd like to be able to run the build on any build agent, but run the tests on a specific machine that has all the prerequisites installed.

Sometimes it is necessary to run tests during a build on a machine different from the one where the build agent is running.

The solution is quite simple, because VSTS / TFS already has all the build tasks needed to execute tests on a different machine.

The very first step is copying all the dlls that contain tests to the target machine; this is accomplished by the Windows Machine File Copy task.

Figure 1: File copy task in action

This is a really simple task; the only suggestion is to never specify the password in clear text, because everyone who can edit the build can read it. In this situation the password is stored in the RmTestAdminPassword variable, and that variable is set up as secret.

Figure 2: Store sensitive information as secret variables of the build

Then we need to add a Visual Studio Test Agent Deployment task, to deploy the Visual Studio Test Agent to the target machine.

Figure 3: Visual Studio Test Agent Deployment

Configuration is straightforward: you need to specify a machine group or a list of target machines (point 1), then the user that will be used to run the test agent (point 2); finally, I specified a custom location on my network for the Test Agent installer. If you do not specify anything, the agent is downloaded from http://go.microsoft.com/fwlink/?LinkId=536423, but this downloads approximately 130MB of data. For faster builds it is preferable to download the agent once, move the installer to a shared network folder, and instruct the task to grab the agent from that location.

Finally, you use the Run Functional Tests task to actually execute the tests on the target machine.

Figure 4: Configuration for running functional tests on the target machine

You specify the machine(s) you want to use (point 1), then all the dlls that contain tests (point 2), and you can also enable code coverage (point 3). Even if the task is called Run Functional Tests, it actually uses the Visual Studio Test runner to run tests, so you can run whatever tests you like.

Thanks to TFS 2015 / VSTS build, we already have all the tasks needed to run unit tests on target machines.

If you are running NUnit tests, or any test framework different from MsTest, this task will fail, because the target machine has no test adapter to run the tests. The failure output tells you that the agent was not capable of finding any test to run in the specified location. This happens even if you added the NuGet NUnit adapter to the project. The solution is simple: first of all, locate all the needed dlls in the packages location of your project.

Figure 5: Installing the NUnit TestAdapter NuGet package downloads all the required dlls to your machine

Once you have located those four dlls on your disk, copy them to the target machine in the folder C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions. This folder was created by the Visual Studio Test Agent Deployment task and contains all the extensions that are automatically loaded by the Visual Studio Test Agent. Once you have copied the dlls, that machine will be able to run NUnit tests without problems.
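
As a sketch, the copy can be scripted with PowerShell (the adapter version and target machine name are placeholders; adjust them to the package you actually restored and to your environment):

$adapterDlls = "packages\NUnitTestAdapter.2.0.0\lib\*.dll"
$extensions = "\\TARGETMACHINE\c$\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions"
Copy-Item $adapterDlls -Destination $extensions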

Once you have copied all the required dlls to the target folder, re-run the build and verify that tests are indeed executed on the target machine.

Figure 6: Output of running tests on remote machine

Test output is transferred to the build machine and attached to the build result as usual, so you do not need anything else to visualize test results, exactly as if the tests were executed by the agent machine.

Figure 7: Test output is included in the build output like standard unit tests run by the build agent

Once the tasks are in place, everything is carried out by the test agent: test results are downloaded and attached to the build results, just as for standard unit tests executed on the build agent machine.

The only drawback of this approach is that it takes some time (on my system about 30 seconds) before the tests start executing on the target machine, but apart from this, you can execute tests on a remote machine with little effort.

Gian Maria.