Running UAT and integration tests during a VSTS Build

There are lots of small lessons I’ve learned from experience when it is time to create a suite of integration / UAT tests for your project. A UAT or integration test is a test that exercises the entire application, sometimes composed of several services collaborating to produce the final result. The difference between UAT and integration tests, in my personal terminology, is that a UAT test directly automates the User Interface, while an integration test can skip the UI and exercise the system directly through its public API (REST, MSMQ commands, etc.).

The typical problem

When it is time to create a solid set of such tests, having them run in an automated build is a must, because it is really difficult for a developer to run them constantly, as you do for standard unit tests.

Such tests are usually slow; developers cannot waste time waiting for them to run before each commit, so we need an automated server to execute them constantly while the developer continues to work.

UAT tests are UI invasive: while a UAT suite runs for a web project, browser windows keep opening and closing, making it virtually impossible for a developer to run an entire UAT suite while continuing to work.

Integration tests are resource intensive: when you have 5 services, MongoDB, ElasticSearch and a test engine that fully exercises the system, there are few resources left for other work, even on a really powerful machine.

Large sets of integration / UAT tests are usually really difficult for a developer to run, so we need to find a way to automate everything.

Creating a build that runs everything on a remote machine can be done with VSTS / TFS; here is an example.


Figure 1: A complete build to run integration and UAT tests.

The build is simple: the idea is to have a dedicated virtual machine with everything needed to run the software already in place. The build copies the new version of the software to that machine, together with all the integration test assemblies, and finally runs the tests on the remote machine.

Running integration and UAT tests is a task usually done on a different machine from the one running the build. This is because that machine should be carefully prepared to run the tests and to simplify the deployment of each new version.

Phase 1 – Building everything

First of all, in phase 1 I compile everything. In this example the solution contains all the .NET code plus an Angular 2 project, so I first build the solution, then npm restore all the packages and compile the Angular application with the NG CLI, and finally I publish a couple of web sites. Those two web sites are part of the original solution, but I publish them separately with an MSBuild command to have full control over the publish parameters.

In this phase I need to build every component, every service and every piece of the UI needed to run the tests, as well as all the test assemblies.
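As a sketch, the separate publish step for each web site is a plain MSBuild invocation; project path, publish method and output folder below are purely illustrative, not the exact parameters of my build:

```powershell
# Hypothetical publish invocation: project path and output folder are illustrative.
& msbuild "src\Jarvis.Web\Jarvis.Web.csproj" `
    /p:Configuration=Release `
    /p:DeployOnBuild=true `
    /p:WebPublishMethod=FileSystem `
    /p:publishUrl="$env:BUILD_ARTIFACTSTAGINGDIRECTORY\web"
```

Publishing to the artifact staging directory keeps the output in a well-known place for the copy tasks of the later phases.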

Phase 2 – Pack release

In the second phase I need to pack the release, a task usually accomplished by a dedicated PowerShell script included in the source code. That script knows where to look for dlls and configuration files, modifies the configuration files, etc., copying everything into a couple of folders: masters and configuration. The masters directory contains everything needed to run the software.

To simplify everything, the remote machine that will run the tests is prepared to accept an XCopy deployment: it is already configured to run the software from a specific folder, with every prerequisite of every service already in place.

This phase finishes with a couple of Windows Machine File Copy tasks that copy this version to the target computer.
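The heart of such a pack script could look like the following sketch; every service name and path here is hypothetical, since the real script depends entirely on the layout of your solution:

```powershell
# Hypothetical pack script sketch: service names and paths are illustrative.
param(
    [string] $baseDir = "$PSScriptRoot\..",
    [string] $outDir  = "$PSScriptRoot\..\release"
)

$masters       = Join-Path $outDir "masters"
$configuration = Join-Path $outDir "configuration"
New-Item -ItemType Directory -Force -Path $masters, $configuration | Out-Null

# Copy the binaries of each service into the masters folder.
Copy-Item "$baseDir\src\MyService\bin\Release\*" -Destination $masters -Recurse -Force

# Keep configuration files in a separate folder, so they can be
# adjusted for the target environment before deployment.
Copy-Item "$baseDir\src\MyService\App.config" -Destination $configuration -Force
```

Keeping the script in source control means the packing logic is versioned together with the code it packs.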

Phase 3 – Pack UAT testing

This is really similar to phase 2, but here the pack PowerShell script creates a folder with all the dlls of the UAT and integration tests, then copies all the test adapters (we use NUnit for unit testing). Once the pack script has finished, another Windows Machine File Copy task copies all the integration tests to the remote machine used for testing.

Phase 4 – Running the tests

This is a simple phase, where you use a Deploy Test Agent task on the test machine followed by a Run Functional Tests task. Please be sure to always place a Deploy Test Agent task before EACH Run Functional Tests task, as described in this post that explains how to deploy the test agent and run functional tests.

Conclusions

For complex software, creating an automated build that runs UAT and integration tests is not always an easy task, and in my experience the most annoying problem is setting up WinRM to allow remote communication between the agent machine and the test machine. If you are in a domain everything is usually simpler, but if for some reason the test machine is not in the domain, prepare yourself for some trouble before you can make the two machines talk to each other.

In a future post I’ll show you how to automate the run of UAT and integration tests in a more robust and more productive way.

Gian Maria.

Dump all environment variables during a TFS / VSTS Build

Environment variables are really important during a build, especially because all build variables are stored as environment variables, which implies that most of the build context is stored inside them. One of the features I miss most is the ability to easily visualize, in the build result, a nice list of all the values of the environment variables. We also need to be aware that tasks can change environment variables during the build, so we must be able to decide the exact point of the build where we want the variables to be dumped.

Having a list of all the environment variable values of the agent during the build is an invaluable feature.

Before moving on to writing a VSTS / TFS task to accomplish this, I verified how to obtain this result with a simple PowerShell task (converting it into a script will then be easy). It turns out that the solution is really simple: just drop a PowerShell task wherever you want in the build definition and run this piece of PowerShell code.

$var = (gci env:*).GetEnumerator() | Sort-Object Name
$out = ""
Foreach ($v in $var) {$out = $out + "`t{0,-28} = {1,-28}`n" -f $v.Name, $v.Value}

write-output "dump variables on $env:BUILD_ARTIFACTSTAGINGDIRECTORY\test.md"
$fileName = "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\test.md"
set-content $fileName $out

write-output "##vso[task.addattachment type=Distributedtask.Core.Summary;name=Environment Variables;]$fileName"

This script is super simple: with the gci (Get-ChildItem) cmdlet I grab a reference to all the environment variables, which are then sorted by name. Then I create a variable called $out and iterate over all the variables, filling $out with markdown text that dumps each one. If you are interested, the `t syntax is used in PowerShell to create special characters, like a tab character. Basically I’m dumping each variable on a separate line that begins with a tab (code formatting in markdown), padding to 28 characters to get nice alignment.

The VSTS / TFS build system allows you to upload a simple markdown file to the build detail if you want to add information to your build.

Given this, I create a test.md file in the artifact staging directory, dump the entire content of the $out variable into it, and finally upload that file to the build with the task.addattachment command.


Figure 1: Simple PowerShell task to execute inline script to dump Environment Variables

After the build has run, here is the detail page of the build, where you can see a nicely formatted list of all the environment variables of that build.


Figure 2: Nice formatted list of environment variables in my build details.

Gian Maria.

Writing a VSTS / TFS task that uses 7zip

Writing a Build / Release task for VSTS / TFS is really simple, but when you need third party software you have to be aware of licensing issues. As an example, I have a small task that uses 7zip under the hood to compress / extract with the fantastic 7zip format.

7zip is good because, even if it uses more processing power to compress files, the result is often much smaller than a standard zip, which is especially good for build agents behind a standard ADSL (300 Kbps upload speed). To create a task that uses 7zip you could simply include the 7zip executable in your task, but this can lead to licensing problems.

Whenever you include executables / libraries in your task you need to check license to verify you have the right to redistribute them.

When your task depends on software that is public, even under a GNU license, but you do not want to spend too much time understanding complex licensing and / or redistribution rights, the simplest path is downloading the tool during the task run. Here are the relevant parts of my script:

 [Parameter(Mandatory=$false)][string] $sevenZipExe = ".\7z\7za.exe"

...

if (-not (test-path $sevenZipExe)) 
{
    Write-Output "7zip not present, download from web location"
    Invoke-WebRequest -Uri "http://www.7-zip.org/a/7za920.zip" -OutFile "$runningDirectory\7zip.zip"
    Write-Output "Unzipping $runningDirectory\7zip.zip to directory $runningDirectory\7z"
    Expand-WithFramework -zipFile "$runningDirectory\7zip.zip" -destinationFolder "$runningDirectory\7z" -quietMode $true 
} 

The solution is really simple: I allow the caller to specify where 7zip is located, and if the 7za executable is not found, I simply download a known version from the internet and uncompress it locally. The only trick is in the function Expand-WithFramework, which is used to expand the downloaded .zip file using the standard classes of the .NET Framework.

function Expand-WithFramework(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true,
    [bool] $quietMode = $false
)
{
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
          Remove-Item $destinationFolder -Recurse -Force
    }
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destinationFolder)
}

This technique gives me two advantages and one disadvantage. The two advantages are: I’m not including any third party executable in my task, so I have no problem making it public, and the task is really small and simple to run, because it does not contain any executable. The disadvantage is that the machine running the build needs an internet connection to download 7zip if it is not installed locally. Since I’m working mainly with VSTS, all my build machines have internet connections, so this is not a problem.
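Once 7za.exe is in place, invoking it from the task is straightforward. Here is a minimal sketch; the archive name and source folder are hypothetical, and $sevenZipExe / $runningDirectory are the variables resolved earlier in the script:

```powershell
# Compress a folder into the 7z format with maximum compression (-mx9).
# $sevenZipExe is the path resolved or downloaded above; other names are illustrative.
& $sevenZipExe a -t7z -mx9 "$runningDirectory\package.7z" "$runningDirectory\masters\*"
if ($LASTEXITCODE -ne 0) { throw "7zip compression failed with exit code $LASTEXITCODE" }
```

Checking $LASTEXITCODE is important, because a failing external executable does not make a PowerShell script fail by itself.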

Gian Maria.

Deploy test agent and run functional test tasks

In VSTS / TFS Build there are a couple of tasks that are really useful for executing UAT or functional tests during a build. The first one deploys the test agent remotely on a target machine, while the second one runs a set of tests on that machine using the agent.

If you use multiple Run Functional Tests tasks, please be sure that before each one there is a corresponding Deploy Test Agent task, or you will get an error. I have a build that runs some functional tests, to which I added another Run Functional Tests task to run a second set of functional tests. The result is that the first run has no problem, while the second one fails with a strange error:

2017-07-13T17:59:51.1964581Z DistributedTests: build location: c:\uatTest\bus
2017-07-13T17:59:51.1964581Z DistributedTests: build id: 4797
2017-07-13T17:59:51.1964581Z DistributedTests: test configuration mapping: 
2017-07-13T17:59:52.4924710Z DistributedTests: Test Run with Id 1697 Queued
2017-07-13T17:59:52.7134843Z DistributedTests: Test run '1697' is in 'InProgress' state.
2017-07-13T18:00:02.9198747Z DistributedTests: Test run '1697' is in 'Aborted' state.
2017-07-13T18:00:12.9219883Z ##[warning]DistributedTests: Test run is aborted. Logging details of the run logs.
2017-07-13T18:00:13.1270663Z ##[warning]DistributedTests: New test run created.
2017-07-13T18:00:13.1280661Z Test Run queued for Project Collection Build Service (prxm).
2017-07-13T18:00:13.1280661Z 
2017-07-13T18:00:13.1280661Z ##[warning]DistributedTests: Test discovery started.
2017-07-13T18:00:13.1280661Z ##[warning]DistributedTests: UnExpected error occured during test execution. Try again.
2017-07-13T18:00:13.1280661Z ##[warning]DistributedTests: Error : Some tests could not run because all test agents of this testrun were unreachable for long time. Ensure that all testagents are able to communicate with the server and retry again.

This happens because the remote test runner cannot be used to perform a second run right after a previous run has finished. The solution is simple: add another Deploy Test Agent task.


Figure 1: Add a Deploy Test Agent before any Run Functional Tests task

This solved the problem and now the build runs perfectly.

Gian Maria.

Publish a website available only in some branches with VSTS build

I have several builds that publish some web projects using the standard MSBuild task. Here is a sample configuration.


Figure 1: Publishing a web site with msbuild task.

This is super simple thanks to the MSBuild task and a bit of MSBuild arguments, but quite often I face an annoying problem: what about a new project that lives only on certain branches, and that I need to publish in the build only if it exists?

Sometimes you need to execute some task in a build only if a file exists on disk (ex: a csproj file)

Suppose this new Jarvis.Catalog.Web project exists today only in a feature branch called feature/xyz; the feature will later be closed to develop, then merged to a release branch and finally to the master branch. This poses a problem: the MSBuild task that publishes the web project will fail on every branch where that specific web project does not exist yet.

This is super annoying, because you can set the build not to fail if this specific task fails, but this will mistakenly mark every build as partially succeeded just because that project is not yet on that branch.

Thanks to the conditional execution of VSTS tasks, it is simple to configure the task to execute only under a specific condition, for example only if the build is triggered for the feature/xyz branch.


Figure 2: Conditional execution for tasks

This is not really a good solution, because you still need to edit the build after each merge to include the new branches. When feature/xyz is merged back to develop, the build must be updated to also run the task on the develop branch, or on other feature branches.

The need to edit the conditional execution of the Task after the solution is “promoted” from branch to branch is annoying and error prone.

The real solution is to run the task only if the project file exists on disk, but this condition is not available in the base set. The reason is that to solve this problem correctly, the build should first run a task that checks whether the file is on disk and sets a build variable, and finally the MSBuild task should execute conditionally on that build variable.

To simplify this scenario, let’s use the new PowerShell task, which now has the option to run a simple script defined inline in the build. Here is the definition of this PowerShell task, which should be placed before the MSBuild task that builds the project.


Figure 3: Simple PowerShell tasks to executing an inline script

The script is really simple; it just uses the Test-Path cmdlet to verify whether the project file exists on disk. Here is the whole script:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments
$CatalogWebExists = Test-Path src\Catalog\Jarvis.Catalog.Web\Jarvis.Catalog.Web.csproj
write-host "##vso[task.setvariable variable=CatalogWebExists;]$CatalogWebExists"
write-host "CatalogWebExists=$CatalogWebExists"

Thanks to the ##vso directive, the task can set a build variable called CatalogWebExists, and the cool part is that the variable is created even if it is not defined in the build definition. The above example shows how a PowerShell task can change the value of, or create, a build variable.

After this script runs, we have a new CatalogWebExists build variable whose value is true if the project file exists. This allows a better conditional expression for the MSBuild task:

and(succeeded(), eq(variables['CatalogWebExists'], 'true'))

This condition is really what we need, because it runs the task only if the build is succeeding and the CatalogWebExists variable is equal to true. Thanks to a simple PowerShell script, I can conditionally execute a task with a condition determined by the script.

Thanks to inline PowerShell scripts, it is really simple to execute a task with a condition evaluated by a PowerShell script.

In this example I’m able to run a task only if a file exists, but you are not limited to this scenario.
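The same pattern generalizes to any check a script can perform. As a sketch, one inline script could probe several optional projects at once and set one build variable per project; the Jarvis.Admin.Web project and its variable name below are hypothetical, added only to show the shape of the loop:

```powershell
# Map of build variable names to optional project files (second entry is hypothetical).
$optionalProjects = @{
    "CatalogWebExists" = "src\Catalog\Jarvis.Catalog.Web\Jarvis.Catalog.Web.csproj"
    "AdminWebExists"   = "src\Admin\Jarvis.Admin.Web\Jarvis.Admin.Web.csproj"
}
foreach ($entry in $optionalProjects.GetEnumerator()) {
    # Set (or create) the build variable via the ##vso logging command.
    $exists = Test-Path $entry.Value
    Write-Host "##vso[task.setvariable variable=$($entry.Key);]$exists"
    Write-Host "$($entry.Key)=$exists"
}
```

Each MSBuild task then carries its own condition on the corresponding variable, and no build edit is needed when a project is promoted from branch to branch.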

Gian Maria.