Error with dotnet restore, corrupted header

I’m trying to compile a project with .NET Core 2.0 on Linux, but I got this strange error when I ran the dotnet restore command.


Figure 1: Error restoring packages

The exact error comes from NuGet.targets and tells me that a local file header is corrupt, pointing to my solution file. The very same project builds just fine on another computer.

Since I’m experiencing an intermittent connection, I suspect that the NuGet cache could be somewhat corrupted, so I ran this command to clear all caches.

dotnet nuget locals all --clear

This clears all the caches. After the command ran, I simply re-ran the dotnet restore command and this time everything went well.

Gian Maria.

Running UAT tests in a VSTS / TFS release

I’ve blogged about how to run UAT and integration tests during a VSTS build; that solution works quite well, but it is probably not the right way to proceed. Generally speaking that build does its work, but I have two main concerns.

1) Executing tests with remote execution requires installing the test agent and involves WinRM, a beast that is not so easy to tame outside a domain.

2) I’m deploying the new version of the application with an XCopy deployment, which is different from a real deploy to production.

The second point is the one that bothers me, because we already deploy to production with PowerShell scripts and I’d like to use the very same scripts to deploy to the machine used for UAT testing. Using the same scripts used for the real release puts those scripts under test as well.

If you want to run UAT and integration testing, the best scenario is when you install the new version of the application with the very same script you use to deploy to production.

If you have a script (or any other form of automation) to release a new version of your application, it is really better to use a different strategy to run UAT tests in VSTS / TFS: instead of using a build, you should use release management. If you still do not have scripts or other automation to release your application, but you have UAT tests to run automatically, it is time to allocate time to automate your deployment. This is a needed prerequisite to automate the run of UAT tests, and it will simplify your life.

The first step is a build that prepares the package with all the files needed by the installation; in my situation I have a couple of .7z files: the first contains all the binaries, the other contains all the updated configurations. These are the two files that I use for deployment with the PowerShell script. The script is quite simple: it stops services, backs up the current version, deletes everything, replaces the binaries with the latest version, then updates the configuration with the new default values, if any. It is not rocket science, it is a simple script that automates everything we have on our release checklist.
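To give an idea of its shape, here is a minimal sketch of such a deployment script; service names, folder paths, archive names and the 7-Zip location are hypothetical placeholders, not the actual script used in the project.

param(
    # All names and paths below are hypothetical placeholders
    [string] $DeployRoot = "C:\Apps\MyApp",
    [string] $BinariesArchive = ".\binaries.7z",
    [string] $ConfigArchive = ".\configuration.7z"
)

# Stop the services that make up the application
Stop-Service -Name "MyApp.Service1", "MyApp.Service2" -ErrorAction SilentlyContinue

# Back up the current version before touching anything
$backupDir = "C:\Apps\Backup\MyApp_$(Get-Date -Format 'yyyyMMdd_HHmmss')"
Copy-Item $DeployRoot $backupDir -Recurse

# Delete everything and replace the binaries with the latest version (7-Zip assumed installed)
Remove-Item "$DeployRoot\*" -Recurse -Force
& "C:\Program Files\7-Zip\7z.exe" x $BinariesArchive "-o$DeployRoot" -y

# Overlay the updated configuration defaults, if any
& "C:\Program Files\7-Zip\7z.exe" x $ConfigArchive "-o$DeployRoot" -y

# Restart everything
Start-Service -Name "MyApp.Service1", "MyApp.Service2"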

Once you have the prerequisites (a build that creates binaries and installation scripts), running UAT tests in a release is really simple: a dependency on the build artifacts, a single environment, and the game is done.


Figure 1: General schema for the release that will run UAT tests.

I depend on the artifacts of a single build, specially crafted for UAT. To run UAT testing I need the .7z files with the new release of the software, but I also need a .7z file with all the UAT tests (NUnit dll files and test adapters) needed to run the tests, plus all the installation scripts.

To simplify everything I’ve cloned the original build I use to create the package for a new release and I’ve added a couple of tasks to package the UAT test files.


Figure 2: Package section of the build

I’ve blogged a lot in the past about my love for PowerShell scripts to create the packages used for a release. This technique is really simple: you can test the scripts outside of the build engine, it is super easy to integrate in whatever build engine you are using, and with PowerShell you can do almost everything. In my source code I have two distinct PowerShell package scripts: the first creates the package with the new binaries, the second one creates a package with all the UAT assemblies as well as the NUnit test adapters. All the installation scripts are simply included in the artifacts directly from source code.
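Just to give a rough idea, the second packaging script (the one for UAT assemblies) could look something like the sketch below; project names, folder layout and the 7-Zip path are hypothetical placeholders.

# Sketch of a UAT packaging script; every path here is a placeholder
$stagingDir = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY "uat"
New-Item $stagingDir -ItemType Directory -Force | Out-Null

# Copy the UAT / integration test assemblies and the NUnit test adapter
Copy-Item ".\src\MyApp.UatTests\bin\Release\*" $stagingDir -Recurse
Copy-Item ".\packages\NUnit3TestAdapter\build\*" $stagingDir -Recurse

# Compress everything into the archive that the release will download
& "C:\Program Files\7-Zip\7z.exe" a (Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY "UatTests.7z") "$stagingDir\*"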

The build for UAT produces three distinct artifacts: a compressed archive with the new version to release, a compressed archive with everything needed to run the UAT tests, and the uncompressed folder with all the installation scripts.

When the build is stable, the next step is configuring a Deployment Group to run UAT. The concept of a Deployment Group is new in VSTS and allows you to specify a set of machines that will be used in a release definition. Once you create a new Deployment Group you can simply go to its details page to copy a script that you can run on any machine to join it to that Deployment Group.
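The generated script downloads the agent package and registers it against the Deployment Group; conceptually the registration part is roughly equivalent to something like the command below (account, project, group name and token are placeholders, and the exact flags can vary between agent versions).

# Rough approximation of the registration done by the generated script;
# all values are placeholders and flags may differ between agent versions.
.\config.cmd --deploymentgroup `
    --deploymentgroupname "UAT Machines" `
    --agent $env:COMPUTERNAME `
    --url "https://myaccount.visualstudio.com/" `
    --projectname "MyProject" `
    --auth PAT --token $personalAccessToken `
    --runasservice --work "_work"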


Figure 3: Script to join a machine to a Deployment Group

As you can see from Figure 3, you can join Windows, Ubuntu or RedHat machines to that group. Once you run the script, the machine will be listed as part of the group, as you can see in Figure 4.


Figure 4: Deployment groups to run UAT tests.

The concept of a Deployment Group is really important, because it allows for pull deployment instead of push deployment. Instead of having an agent that remotely configures the machines, the machines of the Deployment Group download the artifacts of the build and run the release locally. This deployment method completely removes all WinRM issues, because the release scripts are executed locally.

When designing a release, a pull model allows you to run installation scripts locally, and this leads to a more stable release process.

There are other advantages of Deployment Groups, like executing in parallel on all the machines of a group. This MSDN post is a good starting point to learn about all the goodness of Deployment Groups.

Once the Deployment Group is working, creating a release is really simple if you have already created the PowerShell scripts for deployment. The whole release definition is represented in Figure 5.


Figure 5: Release definition to run UAT testing

First of all I run the installer script (it is an artifact of the build, so it is downloaded locally), then I uncompress the archive that contains the UAT tests and delete the app_offline.htm file that was generated by the script to take the IIS web site offline during the installation.

Then I need to modify a special .application file that is used to point to a specific configuration set on the UAT machine. That step is needed because the same machine is used to run UAT tests during a release or during a build (with the technique discussed in the previous post), so I need to run the UAT tests with two different sets of parameters.

Then I run another PowerShell script that changes the Web.config of the application to use Forms authentication instead of Integrated authentication (we use fake users during UAT). After these steps everything is ready to run the UAT tests, and I can run them using the standard Visual Studio Test task, because the release scripts run locally on the machines belonging to the Deployment Group.
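As an illustration, the authentication switch could be implemented with a few lines like these; the Web.config path is a hypothetical placeholder and the real script in source control is likely more elaborate.

# Sketch: switch the application to Forms authentication for UAT runs.
# The Web.config path below is a hypothetical placeholder.
$webConfigPath = "C:\inetpub\MyApp\Web.config"
[xml] $webConfig = Get-Content $webConfigPath

# Change <authentication mode="Windows"> to mode="Forms"
$authNode = $webConfig.configuration.'system.web'.authentication
$authNode.mode = "Forms"

$webConfig.Save($webConfigPath)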

Most of the steps are peculiar to this specific application; if your application is simpler, like a plain IIS application, the release will probably be even simpler. In my situation I need to install several Windows services, update an IIS application and another Angular application, and so on.

If you configure the release to start automatically as soon as new artifacts are present, you can simply trigger the build and everything will run automatically for you. Just queue the build and you will end up with a nice release that contains the results of your UAT tests.


Figure 6: Test result summary in release detail.

This technique is superior to running UAT tests during a standard build; first of all you do not need to deal with WinRM, but the real advantage is continuously testing your installation scripts. If for some reason a release script stops working, you will end up with a failing release, or all the UAT tests will fail because the application was not installed correctly.

The other big advantage is having the tests run locally with the standard Visual Studio test runner, instead of dealing with remote test execution, which is slower and more error prone.

The final great advantage of this approach is that you gain confidence in your installation scripts, because they run constantly against your code, instead of being run only when you are actually releasing a new version.

As a final note, Deployment Groups is a feature that, at the time I’m writing this post, is available only in VSTS and not in TFS.

Gian Maria.

.NET Core 2.0, errors installing on Linux

.NET Core 2.0 is finally out, and I immediately tried to install it on every machine, especially on my Linux test machines. On one of my Ubuntu machines I got an installation problem, a generic apt-get error, and I was a little puzzled about why the installation failed.

Since on Windows the most common cause of .NET Core installation errors is the presence of an old package (especially a preview version), I decided to uninstall all previous installations of .NET Core on that machine. Luckily, doing this on Linux is really simple; first of all I list all the installed packages that have dotnet in the name:

sudo apt list --installed | grep dotnet

This is what I got after a clean installation of .NET Core 2.0.


Figure 1: List of packages that contain dotnet in the name

But on that specific virtual machine I had several versions and a preview of 2.0, so I decided to uninstall every package using the command sudo apt-get purge packagename. Finally, after all the packages were uninstalled, I issued a sudo apt-get clean, then tried to install .NET Core 2.0 again and everything went well.
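In summary, the whole cleanup boils down to a short sequence of commands like the following (the actual package names depend on what the list command reports on your machine):

sudo apt list --installed | grep dotnet   # list every dotnet related package
sudo apt-get purge <package-name>         # repeat for each package in the list
sudo apt-get clean
sudo apt-get install dotnet-sdk-2.0.0     # then install 2.0 again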

If you have any installation problem with .NET Core on Linux, just uninstall everything related to .NET Core with apt-get purge and this should fix your problem.

Gian Maria.

Running UAT and integration tests during a VSTS Build

There are lots of small suggestions I’ve learned from experience when it is time to create a suite of integration / UAT tests for your project. A UAT or integration test is a test that exercises the entire application, sometimes composed of several services that collaborate to create the final result. The difference between UAT tests and integration tests, in my personal terminology, is that UAT tests use direct automation of the user interface, while integration tests can skip the UI and exercise the system directly through public APIs (REST, MSMQ commands, etc.).

The typical problem

When it is time to create a solid set of such tests, having them run in an automated build is a must, because it is really difficult for a developer to run them constantly as you do for standard unit tests.

Such tests are usually slow; developers cannot waste time waiting for them to run before each commit, so we need an automated server to execute them constantly while the developer continues to work.

Those tests are UI invasive: while the UAT tests run for a web project, browser windows keep opening and closing, making it virtually impossible for a developer to run an entire UAT suite while continuing to work.

Integration tests are resource intensive: when you have 5 services, MongoDB, ElasticSearch and a test engine fully exercising the system, there are few resources left for doing other work, even if you have a really powerful machine.

Large sets of integration / UAT tests are usually really difficult for a developer to run; we need to find a way to automate everything.

Creating a build to run everything on a remote machine can be done with VSTS / TFS, and here is an example.


Figure 1: A complete build to run integration and UAT tests.

The build is simple; the idea is to have a dedicated virtual machine with everything needed to run the software already in place. The build then copies the new version of the software to that machine, together with all the integration test assemblies, and finally runs the tests on the remote machine.

Running integration and UAT tests is a task usually done on a different machine from the one running the build. This happens because that machine should be carefully prepared to run the tests and to simplify deployment of the new version.

Phase 1 – Building everything

First of all there is phase 1, where I compile everything. In this example we have a solution that contains all the .NET code, plus a project with Angular 2, so we first build the solution, then npm restores all the packages and the NG CLI compiles the Angular application; finally I publish a couple of web sites. Those two web sites are part of the original solution, but I publish them separately with an MSBuild command to have full control over the publish parameters.
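As an example, publishing one of those web sites with full control over the parameters can be done with an MSBuild command along these lines; project and output paths are placeholders, and in the build the output folder would typically point to the artifact staging directory.

# Hypothetical example of a file-system publish of a single web project
msbuild .\src\MyApp.Web\MyApp.Web.csproj /p:Configuration=Release `
    /p:DeployOnBuild=true /p:WebPublishMethod=FileSystem `
    /p:PublishUrl="C:\BuildStaging\Web1" /p:DeleteExistingFiles=true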

In this phase I need to build every component, every service, every piece of the UI needed to run the tests. I also need to build all the test assemblies.

Phase 2 – Pack release

In the second phase I need to pack the release, a task usually accomplished by a dedicated PowerShell script included in the source code. That script knows where to look for dlls and configuration files, how to modify configuration files, and so on, copying everything into a couple of folders: masters and configuration. The masters directory contains everything needed to run the application.

To simplify everything, the remote machine that will run the tests is prepared to accept an XCopy deployment. This means that the remote machine is already configured to run the software from a specific folder; every prerequisite needed by every service is already in place, so everything runs from that folder.

This phase finishes with a couple of Windows Machine File Copy tasks to copy this version onto the target computer.

Phase 3 – Pack UAT testing

This is really similar to phase 2, but here the pack PowerShell script creates a folder with all the dlls of the UAT and integration tests, then copies in all the test adapters (we use NUnit for unit testing). Once the pack script has finished, another Windows Machine File Copy task copies all the integration tests onto the remote machine used for testing.

Phase 4 – Running the tests

This is a simple phase, where you use a Deploy Test Agent task on the test machine followed by a Run Functional Tests task. Please be sure to always place a Deploy Test Agent task before EACH Run Functional Tests task, as described in this post that explains how to deploy the test agent and run functional tests.

Conclusions

For complex software, creating an automated build that runs UAT and integration tests is not always an easy task, and in my experience the most annoying problem is setting up WinRM to allow remote communication between the agent machine and the test machine. If you are in a domain everything is usually simpler, but if for some reason the test machine is not in the domain, prepare yourself for some problems before you can make the two machines talk to each other.

In a future post I’ll show you how to automate the run of UAT and integration tests in a more robust and more productive way.

Gian Maria.

Dump all environment variables during a TFS / VSTS Build

Environment variables are really important during a build, especially because all build variables are stored as environment variables, which implies that most of the build context is stored inside them. One of the features I miss most is the ability to easily visualize, in the build result, a nice list of the values of all environment variables. We also need to be aware that tasks can change environment variables during the build, so we need to be able to decide the exact point of the build where we want the variables to be dumped.

Having a list of the values of all the environment variables of the agent during the build is an invaluable feature.

Before moving on to writing a VSTS / TFS task to accomplish this, I verified how I could obtain this result with a simple PowerShell task (converting it into a script will then be an easy task). It turns out that the solution is really, really simple: just drop a PowerShell task wherever you want in the build definition and choose to run this piece of PowerShell code.

# Grab all environment variables and sort them by name
$var = (gci env:*).GetEnumerator() | Sort-Object Name
$out = ""
# Build a markdown string: one tab-indented line per variable, padded to 28 chars
Foreach ($v in $var) {$out = $out + "`t{0,-28} = {1,-28}`n" -f $v.Name, $v.Value}

write-output "dump variables on $env:BUILD_ARTIFACTSTAGINGDIRECTORY\test.md"
$fileName = "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\test.md"
# Write the markdown file, then attach it to the build summary
set-content $fileName $out

write-output "##vso[task.addattachment type=Distributedtask.Core.Summary;name=Environment Variables;]$fileName"

This script is super simple: with the gci (Get-ChildItem) cmdlet I grab a reference to all the environment variables, which are then sorted by name. Then I create a variable called $out and iterate over all the variables to fill $out with markdown text that dumps each of them. If you are interested, the `t syntax is used in PowerShell to create special characters, like a tab char. Basically I’m dumping each variable on a separate line that begins with a tab (code formatting in markdown), padding to 28 chars to have nice formatting.

The VSTS / TFS build system allows you to upload a simple markdown file to the build detail if you want to add information to your build.

Given this, I create a test.md file in the artifact staging directory where I dump the entire content of the $out variable, and finally with the task.addattachment logging command I upload that file to the build.


Figure 1: Simple PowerShell task to execute inline script to dump Environment Variables

After the build ran, here is the detail page of the build, where you can see a nicely formatted list of all the environment variables of that build.


Figure 2: Nice formatted list of environment variables in my build details.

Gian Maria.