TFS 2018 is out, time to upgrade

A few days have passed, but it is worth reminding you that TFS 2018 is finally out. Some people are surprised because after TFS 2015 we had TFS 2017, and while we are still in 2017 we already have version 2018; but this is the good part of the ALM tools at Microsoft, they really ship tons of new goodness every year :).

The Release notes page contains all the details about the new version: from that link you can watch a short 13 minute video that explains what is new in this release, and as usual the page contains a detailed list of all the new features with in-depth information about each of them.

I strongly suggest you start verifying system requirements and planning for the upgrade, because, as usual, it is a good habit to install the latest bits when possible, to avoid a big bang upgrade after years of not upgrading.

It is always a good practice not to skip any major version: the upgrade process will be smoother than doing a big jump (like people migrating from TFS 2008 to 2015/2017).

Apart from new features, the above link also lists all the features that have been removed in this version because they were deprecated in the previous one. This can be an upgrade blocker, but I strongly suggest you start thinking about a remediation plan, instead of being stuck forever on the 2017 version.

Among the removed features, Team Rooms are probably the least impacting: very few people were using them, and you can use Slack or other tools instead. The TFS Extension for SharePoint was also removed, another feature that very few people will miss. The Lab Center in Microsoft Test Manager is gone too, but probably the most important missing feature is XAML Build support. In TFS 2018 you can only use the new build system introduced with TFS 2015: no excuses guys, you really need to migrate every XAML build to the new format as soon as possible.

Happy upgrading.

Gian Maria

Configure a VSTS Linux agent with docker in minutes

It is really simple to create a build agent for VSTS that runs on Linux and is capable of building and packaging your .NET Core project; I've explained everything in a previous post, but I want to show you that, with Docker, the whole process takes only a few minutes.

Everyone knows that setting up a build machine often takes time. VSTS makes it super simple to install the agent: just download a zip, call a script to configure the agent and the game is done. But this is only one side of the story. Once the agent is up, if you fire a build it will fail if you did not install all the tools needed to compile your project (.NET Framework), and often you need to install the whole Visual Studio environment because of specific dependencies. I also have code that needs MongoDB and Sql Server to run tests against those two databases, and this usually requires even more manual work to set everything up.

In this situation Docker is your lifesaver, because it allowed me to set up a build agent on Linux in less than one minute.

Here are the steps: first of all, the unit tests use environment variables to grab the connection strings to MongoDB, MSSQL and every external service they need. This is a key part, because each build agent can set up those environment variables to point to the right servers. You can assume that 99% of the time the connection string will be something like mongodb://localhost:27017/, because the build agent usually has MongoDB installed locally to speed up the tests, but you cannot be sure, so it is better to leave each agent the ability to change those variables.
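
Just to make the idea concrete, here is a minimal PowerShell sketch of that fallback logic (the real check lives inside the test fixture code; TEST_MONGODB and TEST_MSSQL are the names used later in this post, while the default values are only illustrative):

# Resolve connection strings from environment variables, falling back to localhost defaults
$mongoUrl = $env:TEST_MONGODB
if ([string]::IsNullOrEmpty($mongoUrl)) { $mongoUrl = 'mongodb://localhost:27017/' }

$mssql = $env:TEST_MSSQL
if ([string]::IsNullOrEmpty($mssql)) { $mssql = 'Server=localhost;user id=sa;password=my_password' }

Write-Output "Tests will run against MongoDB at $mongoUrl and Sql Server with '$mssql'"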

With this prerequisite in place, I installed a plain Ubuntu machine and then installed Docker. Once Docker is up and running, I just fire up three containers; the first one is the MongoDB database:

sudo docker run -d -p 27017:27017 --restart unless-stopped --name mongommapv1 mongo

Then, thanks to Microsoft, I can run Sql Server on Linux in a container; here is the second Docker container, running MSSQL:

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=my_password' -p 1433:1433 --name msssql --restart=unless-stopped -d microsoft/mssql-server-linux

This will start a container with Microsoft Sql Server, listening on the standard port 1433, with the sa user and the password my_password. Finally, I start the Docker agent for VSTS:

sudo docker run \
  -e VSTS_ACCOUNT=prxm \
  -e VSTS_TOKEN=xxx \
  -e TEST_MONGODB=mongodb://172.17.0.1 \
  -e TEST_MSSQL='Server=172.17.0.1;user id=sa;password=my_password' \
  -e VSTS_AGENT='schismatrix' \
  -e VSTS_POOL=linux \
  --restart unless-stopped \
  --name vsts-agent-prxm \
  -it microsoft/vsts-agent

Thanks to the -e option I can specify any environment variable I want; this allows me to set the TEST_MSSQL and TEST_MONGODB variables for the third Docker container, the VSTS agent. The IP addresses of MongoDB and MSSQL belong to a special interface called docker0, a virtual network interface shared by Docker containers.

Figure 1: Configuration of the docker0 interface on the host machine

Since I've configured the containers to map the MongoDB and SQL ports to the same ports on the host, I can access MongoDB and MSSQL directly using the docker0 interface IP address of the host. You could use docker inspect to find the exact IP of each container on this subnet, but you can simply use the IP of the host.

Figure 2: Connecting to the MongoDB instance

With just three lines of code my agent is up and running and is capable of executing builds that require external database engines to verify the code.

This is the perfect technique to spin up a new build server in minutes (apart from the time needed for my network to download the Docker images 🙂 ), with a few lines of code and on a machine that has no UI (clearly you want a minimal Linux installation so that only the things you need are installed).

Gian Maria.

Pause build and clear long build queue

In the VSTS / TFS build system you can switch a build definition between three states: Enabled, Paused and Disabled. The Paused state is really special, because all the build triggers are still active and builds are queued, but none of those queued builds ever starts.

Figure 1: Paused build

The Paused state should be used with great care, because if you forget a build definition in this state you can end up with lots of queued builds, as you can see in Figure 2:

Figure 2: Really high number of builds queued, because the build definition is paused.

What happened in Figure 2 is that some user probably paused the build believing that no builds would be queued; after some weeks he wanted to re-enable it, but by then there were 172 builds in the queue.

If you are in a situation like this, you probably need to remove all the queued builds before re-enabling the build definition. If you simply set the build back to Enabled, you risk completely saturating your build queue. To solve this problem, just go to the Queued tab of the build page.

Figure 3: The Queued tab of the build page

From this page you can filter and show only the queued builds for the definition that was paused, then select them all and delete every scheduled build at once. Thanks to the filtering abilities of the Queued tab, you can quickly identify queued builds and perform bulk operations on them.
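
If you prefer to script this cleanup instead of clicking in the web UI, the same result can be obtained with the Build REST API. What follows is only a rough sketch: account, project, definition id and api-version are placeholders that you need to adapt to your TFS / VSTS instance.

# Personal Access Token with build permissions (placeholder)
$personalAccessToken = 'xxx'
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$personalAccessToken")) }

# Base url of the team project (placeholder account / project)
$baseUrl = 'https://myaccount.visualstudio.com/DefaultCollection/MyProject/_apis/build'

# List all builds of definition 42 that are still waiting in the queue
$queued = Invoke-RestMethod -Uri "$baseUrl/builds?definitions=42&statusFilter=notStarted&api-version=2.0" -Headers $headers

# Cancel every queued build: setting the status to cancelled removes it from the queue
foreach ($build in $queued.value) {
    Invoke-RestMethod -Uri "$baseUrl/builds/$($build.id)?api-version=2.0" -Method Patch -Headers $headers -ContentType 'application/json' -Body '{ "status": "cancelled" }'
}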

Now that we have deleted all 172 queued builds, we can re-enable the build definition without the risk of saturating the build queue.

Gian Maria.

Running UAT and integration tests during a VSTS Build

There are lots of small suggestions I've learned from experience when it comes to creating a suite of integration / UAT tests for your project. A UAT or integration test is a test that exercises the entire application, sometimes composed of several services collaborating to produce the final result. The difference between UAT tests and integration tests, in my personal terminology, is that a UAT test drives the application through direct automation of the User Interface, while an integration test can skip the UI and exercise the system directly through its public API (REST, MSMQ commands, etc.).

The typical problem

When it is time to create a solid set of such tests, having them run in an automated build is a must, because it is really difficult for a developer to run them constantly as you do with standard unit tests.

Such tests are usually slow; developers cannot waste time waiting for them to run before each commit, so we need an automated server to execute them constantly while the developers continue to work.

These tests are also UI invasive: while the UAT tests run for a web project, browser windows keep opening and closing, making it virtually impossible for a developer to run an entire UAT suite while continuing to work.

Integration tests are resource intensive: when you have 5 services, MongoDB, ElasticSearch and a test engine that fully exercises the system, there are few resources left for other work, even if you have a really powerful machine.

Large sets of integration / UAT tests are usually really difficult for a developer to run, so we need to find a way to automate everything.

Creating a build that runs everything on a remote machine can be done with VSTS / TFS, and here is an example.

Figure 1: A complete build to run integration and UAT tests.

The build is simple: the idea is to have a dedicated Virtual Machine with everything needed to run the software already in place. The build then copies the new version of the software to that machine, together with all the integration test assemblies, and finally runs the tests on the remote machine.

Running integration and UAT tests is a task that is usually done on a different machine from the one running the build, because that machine should be carefully prepared to run the tests and to simplify deployment of the new version.

Phase 1 – Building everything

First of all I have Phase 1, where I compile everything. In this example we have a solution that contains all the .NET code plus a project with Angular 2, so we first build the solution, then npm restores all the packages and the application is compiled with the NG CLI; finally I publish a couple of web sites. Those two web sites are part of the original solution, but I publish them separately with an MSBuild command to have full control over the publish parameters.
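
As an example of that last step, the publish of a single web site can be driven by a command similar to the following one (this is only a sketch: the project path and output folder are placeholders, and the properties are the standard Web Publish ones, so double check them against your project type):

# msbuild must be on the PATH of the agent (or use the full path to MSBuild.exe)
& msbuild .\src\MyWebSite\MyWebSite.csproj `
    /p:Configuration=Release `
    /p:DeployOnBuild=true `
    /p:WebPublishMethod=FileSystem `
    /p:PublishUrl="$env:BUILD_ARTIFACTSTAGINGDIRECTORY\MyWebSite" `
    /p:DeleteExistingFiles=true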

In this phase I need to build every component, every service and every piece of the UI needed to run the tests. I also need to build all the test assemblies.

Phase 2 – Pack release

In the second phase I need to pack the release, a task usually accomplished by a dedicated PowerShell script included in the source code. That script knows where to look for dlls and configuration files, how to modify configuration files, and so on, and it copies everything into a couple of folders: masters and configuration. In the masters directory we have everything that is needed to run the software.
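
At its heart that script is little more than a series of copies; here is a minimal sketch of the idea (the masters and configuration folder names follow the convention described above, while the service name and source paths are purely illustrative):

# Destination folders of the pack step (placeholder root folder)
$releaseDir = "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\release"
$masters = Join-Path $releaseDir 'masters'
$configuration = Join-Path $releaseDir 'configuration'
New-Item -ItemType Directory -Force -Path $masters, $configuration | Out-Null

# Copy the binaries of each service into the masters folder
Copy-Item -Path .\src\MyService\bin\Release\* -Destination $masters -Recurse -Force

# Keep configuration files separate so they can be adjusted for the target environment
Copy-Item -Path .\src\MyService\App.config -Destination $configuration -Force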

To simplify everything, the remote machine that will run the tests is prepared to accept an XCopy deployment. This means that the remote machine is already configured to run the software from a specific folder: every prerequisite needed by every service is already in place, so the software can run from that folder as-is.

This phase finishes with a couple of Windows Machine File Copy tasks that copy this version to the target computer.

Phase 3 – Pack UAT testing

This is really similar to Phase 2, but in this phase the pack PowerShell script creates a folder with all the dlls of the UAT and integration tests, then copies all the test adapters (we use NUnit for unit testing). Once the pack script has finished, another Windows Machine File Copy task copies all the integration tests to the remote machine used for testing.

Phase 4 – Running the tests

This is a simple phase, where you use a Deploy Test Agent task on the test machine followed by a Run Functional Tests task. Please be sure to always place a Deploy Test Agent task before EACH Run Functional Tests task, as described in this post that explains how to deploy the Test Agent and run Functional Tests.

Conclusions

For complex software, creating an automated build that runs UAT and integration tests is not always an easy task, and in my experience the most annoying problem is setting up WinRM to allow remote communication between the agent machine and the Test Machine. If you are in a domain everything is usually simpler, but if for some reason the Test Machine is not in the domain, prepare yourself for some trouble before you can make the two machines talk together.
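
For the workgroup scenario, the usual starting point is enabling remoting on the Test Machine and trusting it from the agent machine; the commands below are only a sketch of that configuration (the machine name is a placeholder, and you should evaluate the security implications of TrustedHosts in your environment):

# On the Test Machine: enable WinRM / PowerShell remoting
Enable-PSRemoting -Force

# On the build agent machine: trust the test machine and verify that WinRM answers
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'mytestmachine' -Force
Test-WSMan -ComputerName 'mytestmachine'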

In a future post I'll show you how to automate the run of UAT and integration tests in a more robust and more productive way.

Gian Maria.

Dump all environment variables during a TFS / VSTS Build

Environment variables are really important during a build, especially because all build variables are stored as environment variables, which implies that most of the build context is stored inside them. One of the features I miss most is the ability to easily show, on the build result page, a nice list of the values of all environment variables. We also need to be aware that tasks can change environment variables during the build, so we need to be able to decide the exact point of the build where we want the variables to be dumped.

Having a list of all the environment variable values of the agent during the build is an invaluable feature.

Before moving on to writing a VSTS / TFS task to accomplish this, I verified how I can obtain this result with a simple PowerShell task (converting it into a custom task later will be easy). It turns out that the solution is really simple: just drop a PowerShell task wherever you want in the build definition and choose to run this piece of PowerShell code.

# Collect all environment variables of the agent, sorted by name
$var = (gci env:*).GetEnumerator() | Sort-Object Name
$out = ""
# Build a markdown string with one tab-indented line per variable
Foreach ($v in $var) {$out = $out + "`t{0,-28} = {1,-28}`n" -f $v.Name, $v.Value}

write-output "dump variables on $env:BUILD_ARTIFACTSTAGINGDIRECTORY\test.md"
$fileName = "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\test.md"
set-content $fileName $out

# Attach the markdown file to the build summary page
write-output "##vso[task.addattachment type=Distributedtask.Core.Summary;name=Environment Variables;]$fileName"

This script is super simple: with the gci (Get-ChildItem) cmdlet I grab a reference to all the environment variables, which are then sorted by name. Then I create a variable called $out and iterate over all the variables to fill $out with markdown text that dumps them all. If you are interested, the backtick syntax (`t) is used in PowerShell to create special characters, like a tab. Basically I'm dumping each variable on a separate line that begins with a tab (code formatting in markdown), aligned at 28 characters to have nice formatting.

The VSTS / TFS build system allows you to upload a simple markdown file to the build details if you want to add information to your build.

Given this, I create a test.md file in the artifact staging directory where I dump the entire content of the $out variable, and finally, with the task.addattachment logging command, I upload that file to the build.

Figure 1: Simple PowerShell task executing an inline script to dump environment variables

After the build has run, here is the detail page of the build, where you can see a nicely formatted list of all the environment variables of that build.

Figure 2: Nicely formatted list of environment variables in my build details.

Gian Maria.