YAML build in VSTS

One of the most exciting features recently introduced in VSTS is the ability to create YAML builds. This feature is still in preview, so you need to enable it, and as usual you can enable it for your account from the preview feature management panel.

image

Figure 1: Enable YAML feature for the entire account

After you enable this feature, when you create a new build you can choose to base it on a YAML definition.

image

Figure 2: Create a build based on YAML definition

A build definition based on YAML is nothing more than the standard build system, but instead of using the graphical editor in the browser, you define the tasks to use and their sequence directly in a YAML file. To understand the syntax of the YAML file you can follow the tutorial on MSDN at this address.

YAML build definitions greatly enhance the build system of VSTS, because they allow you to create multiple YAML files directly in the code to define your builds.

This is a preview, and if you want to experiment you currently need to browse the source code of the tasks directly on GitHub to find their names and parameters. I’ll write a more detailed post on how to set up a real build by looking up tasks; for this simple introductory post I’ll use a super simple build definition that uses a custom task of mine.

image

Figure 3: Simple build definition with a simple task.
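Since the screenshot is not reproduced as text, here is a rough sketch of what a minimal definition looks like (the task name and its inputs are purely illustrative, not the custom task shown in the figure, and the preview schema may still change):

steps:
- task: CmdLine@1
  displayName: Say hello
  inputs:
    filename: echo
    arguments: 'Hello from a YAML build'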

This is not a real build, but it shows how simple it is to create a YAML build: just write one task after another and you are ready to go. Now I can create the YAML build in VSTS, and the only option is the name of the script.

image

Figure 4: Simple YAML build, just choose the queue, the name of the build and the path of the YAML definition

image

Figure 5: Queue a build and verify that your definition was run

Ok, we can agree that the overall experience is still rough, especially because you need to look up the names and parameters of the tasks directly from code, but I really believe that this is the future. Here are the advantages that I see in this approach.

YAML builds use the very same engine as standard builds. This is one of my favorites: you have a single build engine and the YAML definition is only an alternative way to define a build, so you can switch whenever you want, you can write a custom task once and run it in both classic and YAML builds, and so on.

Each branch has its own definition of the build. When you work with Git it is standard to have many branches, and with classic builds, having a single build that works with different branches is not always easy. Thanks to conditional execution you can execute a task only if you are on a specific branch, but this is clumsy. With the YAML file, each branch can have its own build definition.

Infrastructure as code. From a DevOps perspective, I prefer to have everything in the code. I can agree that a build is not really infrastructure, but the concept is the same: build definitions are really part of the project and they should live in the code.

Moving between accounts. Thanks to Git, moving code between accounts (e.g. to consolidate accounts) is super easy, but moving builds between accounts is really not an easy process. With YAML definitions, as soon as I move the code I also move the builds, and this greatly simplifies moving projects between accounts.

More to come.

Gian Maria.

TFS 2018 is out, time to upgrade

A few days have passed, but it is good to remind you that TFS 2018 is finally out. Some people are surprised because after TFS 2015 we had TFS 2017 and, while still in 2017, we already have version 2018; but this is the good part of the ALM tools in Microsoft: they are really shipping tons of new goodness each year :).

The release notes page contains all the details about the new version. From that link you can watch a short 13 minute video that explains what is new in this version and, as usual, the page has a detailed list of all the news, with in-depth information about each of the new features.

image

I strongly suggest you start verifying system requirements and planning for the upgrade because, as usual, it is a good habit to install the latest bits if possible, to avoid having to do a big bang upgrade after years of not upgrading.

It is always good practice not to skip major versions; the upgrade process will be much smoother than doing a big jump (like people migrating from TFS 2008 to 2015/2017).

Apart from the new features, the above link also lists all the features that have been removed from this version because they were deprecated in the previous one. This can be an upgrade blocker, but I strongly suggest you start thinking about a remediation plan instead of being stuck forever on the 2017 version.

Of the removed features, Team Rooms are probably the least impacting: very few people use them, and you can use Slack or other tools instead. The TFS extension for SharePoint was also removed, another feature that very few people will miss. The Lab Center in Microsoft Test Manager was removed as well, but probably the most important missing feature is XAML Build support. In TFS 2018 you can only use the new build system introduced with TFS 2015; no excuses guys, you really need to migrate every XAML build to the new format as soon as possible.

Happy upgrading.

Gian Maria

VSTS build failed test phase, but 0 tests failed

I ran into a strange situation where a build suddenly started signaling failing tests, but actually zero tests failed.

image

Figure 1: No test failed, but the test phase was marked as failed

As you can see in Figure 1, the Test step is marked as failed, but actually not a single test failed; indeed a strange situation. To troubleshoot the problem, you need to select the failing step to verify the exact output of the task.

image

Figure 2: The output of the Test step shows that the test run indeed failed

As you can see in Figure 2, the Visual Studio component that executes the tests is reporting an error during execution. This implies that, even if no test is failing, something went wrong during the test run. This happens whenever a test adapter writes errors to the console during execution, and it is done to signal that something went wrong.

Failing the test run when there are errors in the output is a best practice, because you should understand what went wrong and fix it.

To troubleshoot the error you need to scroll through the whole output of the task to find exactly what happened. Looking at the output I was able to find the reason why the run failed.

2017-11-17T14:43:19.6093568Z ##[error]Error: Machine Specifications Visual Studio Test Adapter – Error while executing specifications in assembly C:\A\_work\71\s\src\Intranet\Intranet.Tests\bin\Release\Machine.TestAdapter.dll – Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.

If an adapter cannot load an assembly, it outputs an error and the test run is marked as failed, because potentially all the tests inside that assembly were not run. This is a best practice: you do not want some tests skipped due to a load error while the build still shows green.

The Machine Specifications test adapter tried to run tests in Machine.TestAdapter.dll and this caused a TypeLoadException. Clearly we do not need to run tests in that assembly, because Machine.TestAdapter.dll is the dll of the test adapter itself and does not contain any tests. This problem actually appeared after I upgraded the NuGet package of the runner; probably the old version of the TestAdapter.dll did not cause errors, but the new version throws an exception.

The problem lies in how the step is configured, because usually you ask the test runner to run tests in all assemblies that contain Test in their name. The solution was to exclude that assembly from the test run.

image

Figure 3: How to exclude a specific dll from the test run
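If the screenshot is not visible, the idea is simply to add an exclusion pattern to the test assemblies filter of the test task; something along these lines (the exact patterns depend on how your step is configured):

**\*Test*.dll
!**\Machine.TestAdapter.dll
!**\obj\**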

And voilà, the build is now green again. The rule is: whenever the test step fails but no test failed, most of the time the cause is a test adapter that was not able to load or run an assembly.

Gian Maria.

Multitargeting in DotNetCore and Linux and Mac builds

One of the most important features of DotNetStandard is the ability to run on Linux and Mac, but if you need to use a library compiled for DotNetStandard in a project that uses the full .NET Framework, sometimes you can run into problems. You can usually reference a dll compiled for DotNetCore from a project that uses the full framework, but in a couple of projects we experienced some trouble with certain assemblies.

Thanks to multitargeting you can simply instruct the DotNet compiler to produce libraries compiled against different versions of the framework, so you can tell the compiler that you want both DotNetStandard 2.0 and full framework 4.6.1; you just need to modify the project file to use the TargetFrameworks tag to request compilation for the different frameworks:

<TargetFrameworks>netstandard2.0;net461</TargetFrameworks>

Et voilà, now if you run the dotnet build command you will find both versions of the assembly in the output folder.

image

Figure 1: Multiple versions of the same library, compiled for different versions of the framework.
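For reference, the output folder of a multitargeted library looks roughly like this (MyLibrary is a hypothetical project name):

bin/Release/netstandard2.0/MyLibrary.dll
bin/Release/net461/MyLibrary.dll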

Multitargeting allows you to produce libraries compiled for different versions of the framework in a single build.

The nice aspect of multitargeting is that you can use the dotnet pack command to request the creation of NuGet packages: the generated packages contain libraries for every version of the framework you chose.
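As a quick sketch (the package id and version here are illustrative), a single command produces a package with one lib folder per target framework:

dotnet pack -c Release
# produces MyLibrary.1.0.0.nupkg containing
#   lib/netstandard2.0/MyLibrary.dll
#   lib/net461/MyLibrary.dll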

image

Figure 2: The package published to MyGet contains both versions of the framework.

The only problem with this approach arises when you try to compile a multitargeted project on Linux or on a Mac, because there the compiler is unable to compile for the full framework, which can be installed only on Windows machines. To solve this problem you should remember that .csproj files of DotNetCore projects are really similar to standard MSBuild project files, so you can use conditional options. This is how I defined multitargeting in a project:

image

Figure 3: Conditional multitargeting

The Condition attribute instructs the compiler to consider that XML node only if the condition is true, and with the dollar syntax you can reference environment variables. The above example can be read this way: if the DOTNETCORE_MULTITARGET environment variable is defined and equal to true, the compiler generates netstandard2.0 and net461 libraries; otherwise (no variable defined, or defined with a false value) the compiler generates only netstandard2.0 libraries.
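The content of the figure is not reproduced as text; a conditional PropertyGroup along the lines described above would look more or less like this (the exact layout of the original project file may differ):

<PropertyGroup>
  <!-- When DOTNETCORE_MULTITARGET is true (typically on Windows), compile for both frameworks -->
  <TargetFrameworks Condition=" '$(DOTNETCORE_MULTITARGET)' == 'true' ">netstandard2.0;net461</TargetFrameworks>
  <!-- Otherwise (variable missing or false, typically Linux / Mac), compile only for netstandard2.0 -->
  <TargetFrameworks Condition=" '$(DOTNETCORE_MULTITARGET)' != 'true' ">netstandard2.0</TargetFrameworks>
</PropertyGroup>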

Using the Condition attribute you can specify different target frameworks with an environment variable.

Everyone with a Windows machine defines this variable to true, and all projects that use this syntax automatically get compiled for both frameworks. On the contrary, people who use Linux or a Mac can work perfectly with only the netstandard2.0 version, simply by not defining the variable.

The risk of this solution is that, if you always work on Linux, you can potentially introduce code that compiles for netstandard2.0 but not for net461. Even if this situation is not happening right now, working on Linux or Mac means the code is never compiled and tested against the full framework. The solution to this problem is simple: just create a build in VSTS that is executed on a Windows agent and remember to set DOTNETCORE_MULTITARGET to true, to be sure that the build targets all the desired frameworks.

image

Figure 4: Use build variables to set up environment variables during the build

Thanks to the VSTS / TFS build system it is super easy to define DOTNETCORE_MULTITARGET at the build level, and you can decide at each build whether the value is true or false (so you are also able to trigger a build that publishes packages only for netstandard2.0). In this build I usually publish the NuGet packages automatically to a MyGet feed; thanks to GitVersion, the version numbering is automatic.

image

Figure 5: Package published in pre-release.

This publishes a pre-release package at each commit, so I can test it immediately. Everything is done automatically and runs in parallel with the build running on Linux, so I’m always 100% sure that the code compiles both on Windows and Linux and that tests are 100% green on each operating system.

Gian Maria.

Configure a VSTS Linux agent with docker in minutes

It is really simple to create a build agent for VSTS that runs on Linux and is capable of building and packaging your DotNetCore projects. I’ve explained everything in a previous post, but I want to show you that, with Docker, the whole process becomes even simpler.

Anyone who has done it knows that setting up a build machine often takes time. VSTS makes it super simple to install the agent: just download a zip, call a script to configure the agent and you are done. But this is only one side of the story. Once the agent is up, if you fire a build it will fail if you did not install all the tools needed to compile your project (.NET Framework), and often you need to install the whole Visual Studio environment because you have specific dependencies. I also have code that needs MongoDB and SQL Server to run tests against those two databases, and this usually requires more manual work to set everything up.

In this situation Docker is your lifesaver, because it allowed me to set up a build agent on Linux in less than one minute.

Here are the steps. First of all, the unit tests use an environment variable to grab the connection string to MongoDB, MSSQL and every external service they need. This is a key part, because each build agent can set those environment variables to point to the right server. You may think that 99% of the time the connection is something like mongodb://localhost:27017/, because the build agent usually has MongoDB installed locally to speed up the tests, but you cannot be sure, so it is better to leave each agent the ability to change those variables.

With this prerequisite, I installed a simple Ubuntu machine and then installed Docker. Once Docker is up and running I just fire up three Docker containers; the first one is the MongoDB database:

sudo docker run -d -p 27017:27017 --restart unless-stopped --name mongommapv1 mongo

Then, thanks to Microsoft, I can run SQL Server on Linux in a container; here is the second Docker container, running MSSQL:

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=my_password' -p 1433:1433 --name mssql --restart=unless-stopped -d microsoft/mssql-server-linux

This starts a container with Microsoft SQL Server listening on the standard port 1433, with the sa user and the password my_password. Finally I start the Docker container for the VSTS agent:

sudo docker run \
  -e VSTS_ACCOUNT=prxm \
  -e VSTS_TOKEN=xxx \
  -e TEST_MONGODB=mongodb://172.17.0.1 \
  -e TEST_MSSQL='Server=172.17.0.1;user id=sa;password=my_password' \
  -e VSTS_AGENT='schismatrix' \
  -e VSTS_POOL=linux \
  --restart unless-stopped \
  --name vsts-agent-prxm \
  -it microsoft/vsts-agent

Thanks to the -e option I can specify any environment variable I want; this allows me to specify the TEST_MSSQL and TEST_MONGODB variables for the third Docker container, the VSTS agent. The IP used to reach MongoDB and MSSQL belongs to a special interface called docker0, a virtual network interface shared by the Docker containers and the host.

image

Figure 1: configuration of docker0 interface on the host machine
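If you want to check the same information on your machine, something like this should do (172.17.0.1/16 is the Docker default for docker0, but verify it on your host):

ip addr show docker0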

Since I’ve configured the containers to publish the Mongo and SQL ports on the same ports of the host, I can access MongoDB and MSSQL directly using the docker0 IP address of the host. You can use docker inspect to find the exact IP of each Docker container on this subnet, but you can simply use the IP of the host.
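For reference, this is roughly how you can ask Docker for the IP assigned to a single container on that subnet (using the container name defined above):

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mongommapv1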

image

Figure 2: Connecting to mongodb instance

With just three lines of code my agent is up and running and is capable of executing builds that require external database engines to verify the code.

This is the perfect technique to spin up a new build server in minutes (except for the time needed by my network to download the Docker images 🙂 ), with a few lines of code and on a machine that has no UI (clearly you want to do a minimal Linux installation to have only the things you need).

Gian Maria.