Use the right Azure Service Endpoint in build vNext

Build vNext has a task dedicated to uploading files to Azure blob storage, as you can see in Figure 1:

Sample build vNext that has an Azure File Copy task configured.

Figure 1: Azure File Copy task configured in a vNext build

The nice part is the Azure Subscription setting, which allows you to choose one of the Azure endpoints configured for the project. Using a service endpoint, you can ask the person who holds the password/keys for the Azure account to configure an endpoint. Once it is configured, it can be used by team members with sufficient rights to access it, without requiring them to know the password, token or anything else.

Thanks to Service Endpoints you can allow members of the team to create builds that interact with Azure accounts without giving them any password or token

If you look around you can find a nice blog post that explains how to connect your VSTS account using a service principal.

Sample configuration of an endpoint for Azure with Service Principal

Figure 2: Configure a service endpoint for Azure with Service Principal Authentication

Another really interesting aspect of Service Endpoints is the ability to choose the people who can administer the endpoint and the people who can use it, giving you full control over who can do what.

Each Service Endpoint has its own security settings to specify the people who can administer or read the endpoint

Figure 3: You can manage security for each Service Endpoint configured

Finally, Service Endpoints give you a centralized way to manage access to your Azure subscription resources: if for some reason a subscription should be removed and not used anymore, you can simply remove the endpoint. This is a better approach than having passwords or tokens scattered all over the VSTS account (builds, etc.).

I followed all the steps in the article to connect my VSTS account using a service principal, but when it was time to execute the Azure File Copy task, I got a strange error.

Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agents\default\tasks\AzureFileCopy\1.0.25\AzureFileCopy.ps1
Looking for Azure PowerShell module at C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1
AzurePSCmdletsVersion= 0.9.8.1
Get-ServiceEndpoint -Name 75a5dd41-27eb-493a-a4fb-xxxxxxxxxxxx -Context Microsoft.TeamFoundation.DistributedTask.Agent.Worker.Common.TaskContext
tenantId= ********
azureSubscriptionId= xxxxxxxx-xxxxxxx-xxxx-xxxx-xxxxxxxxxx
azureSubscriptionName= MSDN Principal
Add-AzureAccount -ServicePrincipal -Tenant $tenantId -Credential $psCredential
There is no subscription associated with account ********.
Select-AzureSubscription -SubscriptionId xxxxxxxx-xxxxxxx-xxxx-xxxx-xxxxxxxxxx
The subscription id xxxxxxxx-xxxxxxx-xxxx-xxxx-xxxxxxxxxx doesn't exist.
Parameter name: id
The Switch-AzureMode cmdlet is deprecated and will be removed in a future release.
The Switch-AzureMode cmdlet is deprecated and will be removed in a future release.
Storage account: portalvhdsxxxxxxxxxxxxxxxxx1 not found. Please specify existing storage account

This error looked really strange, because one of the error lines told me:

The subscription id xxxxxx-xxxxxx-xxxxxx-xxxxxxxxxxxxx doesn’t exist.

This cannot be the real error, because I'm really sure that my Azure subscription is active and working everywhere else. Thanks to the help of Roopesh Nair, I was able to find my mistake. It turns out that the storage account I'm trying to access is an old one, created in Azure Classic mode, and it is not accessible with a Service Principal. A Service Endpoint using a Service Principal can manage only Azure Resource Manager based entities.

Shame on me :) because I was aware of this limitation, but for some reason I completely forgot about it this time.

Another hint at the problem is the error line telling me: Storage account xxxxxxxxx not found. That should ring a warning bell about the script not being able to find that specific resource, because it was created in classic mode.
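As a quick sanity check on your own machine (a sketch, assuming the classic Azure PowerShell cmdlets of that era are installed and you already ran Add-AzureAccount), you can list the storage accounts visible in Service Management mode; if your account shows up in this list, it is a classic resource and a Service Principal endpoint will not see it:

# List classic (Service Management) storage accounts; a classic-mode
# account appears here, while an ARM account does not.
Get-AzureStorageAccount | Select-Object StorageAccountName, Location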

The solution is simple: I could use a blob storage created with Azure Resource Manager, or I can configure another Service Endpoint, this time based on a management certificate. The second option is preferable, because having two Service Endpoints, one configured with a Service Principal and the other configured with a certificate, allows me to manage all types of Azure resources.

Configuring an endpoint with a certificate is really simple: you only need to copy data from the management certificate into the endpoint configuration and you are ready to go.

Configuration of an endpoint based on a certificate

Figure 4: Configure an Endpoint based on Certificate

Now my Azure File Copy build task works as expected, and I can choose the right Service Endpoint based on the type of resource I need to access (Classic or ARM).

Gian Maria

Save a build as a Draft

There are lots of interesting new features in TFS / VSTS Build vNext, but surely one of the coolest is the ability to edit a build and save it as a draft. This is currently available only in the online version (Visual Studio Team Services).

Saving build as a draft

Figure 1: Saving a build as a Draft

In practice, saving a build as a draft allows you to edit a build and try a new configuration / task / personalization without disrupting the old build that works. Customizing a build can be a difficult task, and the greatest risk with the older build system is having an unusable build until the new personalization is done.

Another usual technique is temporarily disabling tasks, to reduce the time needed to finish the build while you verify that your new customization works. Suppose you added a final task to manage artifact publishing: you want to verify that everything works, so you disable running unit tests to finish the build faster and get quicker feedback. If you do this on the real build, all queued builds will have unit tests disabled until the customization is completed.

The main problem when you edit a build is disrupting continuous integration until your work is finished.

With the ability to save as a draft you can avoid this type of disruption. Once you've saved a build as a draft, you can queue the draft, verify the outcome and, when everything works as expected, publish it, effectively updating the real build only when you've tested all modifications and you are sure that the new definition does what you really want.

Build results of draft builds have a .DRAFT suffix to distinguish them from standard build output

Figure 2: Build results of draft builds have a .DRAFT suffix to distinguish them from standard build output.

The net effect is: you are able to test modifications in isolation, without disrupting the original working build definition. Combine this with the ability to quickly spin up an agent on your own machine and you will have a really pleasant build definition update experience.

1. Quickly configure an agent on your local machine
2. Try your personalization and save it as a draft
3. Queue the draft on your agent, which is immediately able to execute the build
4. Once everything is ok, publish the build

Gian Maria.

Integrating GitVersion and Gitflow in your vNext Build

In a previous article I showed how to create a VSO build vNext to automatically publish a NuGet package to MyGet (or NuGet) during a build [Publishing a Nuget package to Nuget/Myget with VSO Build vNext]. Now it is time to create a more interesting build that automatically versions your assemblies and NuGet packages based on GitFlow.

GitFlow and GitVersion

GitFlow is a simple convention to manage the branches in your Git repository, supporting a production branch, a development branch and feature/support/release/hotfix branches. If you are completely new to the subject, you can find plenty of introductory material online.

You can also find a nice plugin for Visual Studio that supports GitFlow directly from the Visual Studio IDE and installs GitFlow for your command line environment in one simple click [VS 2015 version] [VS 2013 version]. Once you get accustomed to GitFlow, the next step is having a look at Semantic Versioning, a simple versioning scheme to manage your packages and assemblies.

The really good news is that a free tool called GitVersion exists to do semantic versioning by simply examining your git history, branches and tags. I strongly suggest reading the GitVersion documentation online, but if you want a quick start, in this blog post I'll show you how to integrate it with a vNext VSO build.

Thanks to the GitVersion tool you can easily manage SemVer in a build vNext with little effort.

How GitVersion works at a basic level

GitVersion can be downloaded into the root folder of your repository; once it is there, invoking it directly from the command line with the /ShowConfig parameter will generate a default configuration file.

GitVersion /ShowConfig > GitVersionConfig.yaml

This will create a default configuration file for GitVersion in the root directory, called GitVersionConfig.yaml. Having a configuration file is completely optional, because GitVersion can work with default options, but it is really useful to make the default parameters explicit, so you know exactly how Semantic Versioning is handled by the tool.
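To give a rough idea of what the file contains, here is an illustrative excerpt (the exact keys and defaults depend on the GitVersion version you run, so generate your own file rather than copying this):

# Excerpt of a GitVersionConfig.yaml; values shown are the usual defaults of the era
assembly-versioning-scheme: MajorMinorPatch
mode: ContinuousDelivery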

I'm not going through the various options of the tool; you can read the docs online, and I'll blog in a future post about a couple of options I usually change from their defaults.

For the scope of this article, everything I need to know is that invoking gitversion.exe without parameters, in a folder containing a git repository with GitFlow enabled, returns JSON data. Here is a possible example:

{
  "Major":1,
  "Minor":5,
  "Patch":"0",
  "PreReleaseTag":"unstable.9",
  "PreReleaseTagWithDash":"-unstable.9",
  "BuildMetaData":"",
  "BuildMetaDataPadded":"",
  "FullBuildMetaData":"Branch.develop.Sha.8ecde89ef5b97eabcf6e0035119643334ba40c4e",
  "MajorMinorPatch":"1.5.0",
  "SemVer":"1.5.0-unstable.9",
  "LegacySemVer":"1.5.0-unstable9",
  "LegacySemVerPadded":"1.5.0-unstable0009",
  "AssemblySemVer":"1.5.0.0",
  "FullSemVer":"1.5.0-unstable.9",
  "InformationalVersion":"1.5.0-unstable.9+Branch.develop.Sha.8ecde89ef5b97eabcf6e0035119643334ba40c4e",
  "BranchName":"develop",
  "Sha":"8ecde89ef5b97eabcf6e0035119643334ba40c4e",
  "NuGetVersionV2":"1.5.0-unstable0009",
  "NuGetVersion":"1.5.0-unstable0009",
  "CommitDate":"2015-10-17"
}

This is the result of invoking GitVersion on the develop branch; now it is time to understand how these version numbers are determined. My master branch is currently tagged 1.4.1 and, since develop will become the next version, GitVersion automatically increments the Minor version number (this is the default and can be configured). The FullSemVer number contains the suffix unstable.9 because the develop branch is usually unstable, and it is 9 commits ahead of master. This immediately gives me an idea of how much work is accumulating.

Now if I start a release 1.5.0 using the command git flow release start 1.5.0, a new release/1.5.0 branch is created, and running GitVersion on that branch returns a FullSemVer of 1.5.0-beta.0. The suffix is beta, because a release branch is something that will be released (so it is a beta), and 0 is the number of commits made since the release branch was created. If you continue to push to the release branch, that last number keeps incrementing.

Finally, when you finish the release, the release branch is merged into master, master is tagged 1.5.0 and the release branch is merged back into develop. Now running GitVersion on develop returns version 1.6.0-unstable.xxx, because master is now on version 1.5.x and develop will be the next version.
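To recap the whole cycle, here is the command sequence with the version GitVersion computes at each stage (branch names follow the GitFlow defaults; the exact pre-release numbers depend on your configuration):

# master is currently tagged 1.4.1, so develop builds as 1.5.0-unstable.N
git flow release start 1.5.0    # creates release/1.5.0, which builds as 1.5.0-beta.N
git flow release finish 1.5.0   # merges to master, tags it 1.5.0, merges back into develop
# develop now builds as 1.6.0-unstable.N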

How you can use GitVersion on build vNext

You can read about how to integrate GitVersion with build vNext directly in the GitVersion documentation, but I want to show you a slightly different approach in this article. The way I use GitVersion is invoking it directly from a PowerShell build script that takes care of everything about versioning.

The main reason I chose this approach is that GitVersion can store all the versioning information in environment variables, but in build vNext environment variables are not maintained between steps by default. The second reason is that I already have a build that publishes to NuGet with the package number specified as a build variable, so I'd like to grab the version numbers in my script and use them to change the variable values of my build.

Thanks to PowerShell, parsing JSON output is super easy; here are the simple instructions I use to invoke GitVersion and parse all the JSON output directly into a PowerShell variable.

# Run GitVersion (without fetching) and capture the console output as a single string
$output = & ..\GitVersion\GitVersion.exe /nofetch | Out-String
# Parse the JSON output into a PowerShell object, one property per version number
$version = $output | ConvertFrom-Json

Parsing the output of GitVersion into a PowerShell variable gives you great flexibility in how you use all the resulting numbers from GitVersion

Then I want to version my assemblies with the versions returned by GitVersion. I start by creating some PowerShell variables with all the numbers I need.

$assemblyVersion = $version.AssemblySemver
$assemblyFileVersion = $version.AssemblySemver
$assemblyInformationalVersion = ($version.SemVer + "/" + $version.Sha)

I'll use the same PowerShell script I described in this post to version assemblies, but this time all the versioning burden is carried by GitVersion. As you can see, I'm also using the AssemblyInformationalVersion attribute, which can be set to any string you want. This will give me a nice file version visible from Windows.
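The full script is linked above; if you just want a feel for what it does, here is a minimal sketch that rewrites the version attributes in every AssemblyInfo.cs with a regex (the src path and the attribute layout are assumptions, adapt them to your repository):

# Find every AssemblyInfo.cs under the source folder (path is illustrative)
Get-ChildItem -Path .\src -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    $content = Get-Content $_.FullName -Raw
    # Replace the three version attributes with the values computed by GitVersion
    $content = $content -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$assemblyVersion"")"
    $content = $content -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$assemblyFileVersion"")"
    $content = $content -replace 'AssemblyInformationalVersion\("[^"]*"\)', "AssemblyInformationalVersion(""$assemblyInformationalVersion"")"
    Set-Content -Path $_.FullName -Value $content
}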


Figure 1: Versioning of the file visible in windows.

This immediately tells me that this is a beta version, and it also gives me the SHA1 of the commit used to create the DLL: maximum traceability with minimum effort. Now it is time to use some build vNext commands to version the NuGet package.

How build vNext can accept commands from PowerShell

The build vNext infrastructure can accept commands from a PowerShell script by looking at the output of the script, as described in this page.

One of the coolest features of build vNext is the ability to accept commands from the console output of any scripting language

Write-Output in PowerShell is all I need to send commands to the build vNext engine. Here is how I change some build variables:

Write-Output ("##vso[task.setvariable variable=NugetVersion;]" + $version.NugetVersionV2)
Write-Output ("##vso[task.setvariable variable=AssemblyVersion;]" + $assemblyVersion)
Write-Output ("##vso[task.setvariable variable=FileInfoVersion;]" + $assemblyFileVersion)
Write-Output ("##vso[task.setvariable variable=AssemblyInformationalVersion;]" + $assemblyInformationalVersion)

If you remember my previous post on publishing NuGet packages, you can see that I used the NugetVersion variable in the build definition to specify the version number of the NuGet package. With the first line of the previous snippet I'm automatically changing that number to the NuGetVersionV2 returned by GitVersion.exe. This is everything I need to version my package with SemVer.

Finally I can use one of these two instructions to change the name of the build.

Write-Output ("##vso[task.setvariable variable=build.buildnumber;]" + $version.FullSemVer)
Write-Output ("##vso[build.updatebuildnumber]" + $version.FullSemVer)

The result of these two instructions is almost the same: the first one changes the build number and also changes the build.buildnumber variable, while the second one only changes the number of the build, without changing the value of the build.buildnumber variable.

Final result

My favourite result is that my build numbers now have a real meaning for me, instead of simply representing a date and an incremental number like 20150101.2.


Figure 2: Resulting builds with SemVer script

Now each build name immediately tells me the branch used to create the build and the version of the code used. As you can see, release and master branches are in continuous deployment: at each push the build is triggered and the NuGet package is automatically published. For the develop branch the build is manual, and is run only when I want to publish a package with an unstable version.

I can verify that everything is ok on the MyGet/NuGet side: packages were published with the correct numbers.


Figure 3: SemVer is correctly applied on NuGet packages

Thanks to GitVersion I can automatically version the build number, all the assemblies and the NuGet package with a few lines of PowerShell.

Conclusions

Thanks to build vNext's ease of configuration, plus PowerShell scripts and the simple command model based on script output, with a few lines of code you are able to use Semantic Versioning in your builds.

This example shows you some of the many advantages you have with the new build system in VSO/TFS.

Gian Maria.

Publishing a Nuget package to Nuget/Myget with VSO Build vNext

Publishing a package to MyGet or NuGet with a TFS/VSO vNext build is a breeze. First of all you should create a .nuspec file that specifies everything about your package, and include it in your source control. Then add a variable to the build called NugetVersion, as shown in Figure 1.

Adding NugetVersion variable to the list of variables for this build.

Figure 1: Added NugetVersion variable to build definition.
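Before looking at the build steps: if you have never written a nuspec, here is a minimal hedged example (id, authors and file paths are placeholders to adapt; the version is a dummy value, since the build will override it at packaging time):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.MyLibrary</id>
    <version>0.0.0</version>
    <authors>Gian Maria</authors>
    <description>Sample package published by the vNext build.</description>
  </metadata>
  <files>
    <file src="bin\Release\MyLibrary.dll" target="lib\net45" />
  </files>
</package>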

In this build I disabled continuous integration, because I want to publish my package only when I decide that the code is good enough to be published. Publishing to a feed on each build is usually a waste of resources and a nice way to make the history of your package a pain. Since I want to publish manually, I checked the "Allow at Queue Time" checkbox, to be able to change the NuGet version number at queue time.

Build vNext has a dedicated step called NuGet Packager that takes care of creating your package from the nuspec file, so you do not need to include nuget.exe in your repository or on the server. If you are curious where nuget.exe is stored, you can check the installation folder of your build agent and browse the Tasks directory where all the tasks are contained. There you should find the NugetPackager folder where all the scripts used by the task are stored.

How to configure Nuget Packager to create your package.

Figure 2: Added Nuget Packager step to my build.

You can use wildcards as a pattern for nuspec files; as an example, you can specify **\*.nuspec to create a package for every nuspec file in your source directory. In this example I have multiple nuspec files in my repository, but I want to deploy only a specific package during this build, so I decided to specify a single file to use. Thanks to the small button with the ellipsis at the right of the textbox, you can choose the file by browsing the repository.

Thanks to source browsing you can easily choose your nuspec file to create package.

Figure 3: Browsing source files to choose nuspec file to use.

Then I chose $(Build.StagingDirectory) as the Package folder, to be sure that the resulting nupkg file is created in the staging directory, outside of the src folder. This is important because, if you do not clean the src folder before each build, you will end up with multiple nupkg files in your agent work directory, one for each version you published in the past. If you use the staging directory as the destination for your nupkg files, it is automatically cleared before each build. With this configuration you are sure that the staging directory contains only the .nupkg files created by the current build.

Finally, in the Advanced tab I used the NuGet Arguments textbox to specify the -Version option, to force using the version specified in the $(NugetVersion) build parameter.
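The content of that textbox is simply passed along to nuget.exe when the package is created, so it can be as simple as this:

-Version $(NugetVersion)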

The last step is including a step of type NuGet Publisher, which will be used to publish your package to NuGet / MyGet.

Configuration of NugetPublisher step to publish your package to your feed

Figure 4: Final publishing step to publish nuget to your feed.

If you used the staging directory as the output folder for your NuGet Packager step, you can specify a pattern of $(build.stagingDirectory)\*.nupkg to automatically publish all packages created in previous steps. If in the future you change the build, adding other NuGet Packager steps to create other packages, you can use this single NuGet Publisher to automatically publish every .nupkg file found in the staging directory.

Finally you need to specify the NuGet Server Endpoint; your combobox is probably empty, so you need to click the Manage link at the right of the combo to manage your endpoints.

Manage endpoint in your VSO account

Figure 5: Managing endpoint

Clicking the Manage link opens a new tab on the Services tab of the collection configuration, where you can add endpoints to connect your VSO account to other services. Since NuGet and MyGet are not in the list, you should add a new service endpoint of type Generic.

Specify your server url and your api key to create an endpoint

Figure 6: Adding endpoint for nuget or myget server

You must specify the server url of your feed and your API key in the Password/Token Key field of the endpoint. Once you press OK the endpoint is created; no one will be able to read the API key from the configuration, and your key is secured in VSO.

Now all Project Administrators can use this endpoint in their NuGet Publisher steps to publish against that feed, without knowing the API key. All endpoints have specific security settings, so you can specify the list of users who are able to change that specific endpoint, or the list of users who are only able to read it. This is a nice way to save the details of your NuGet feed in VSO, specifying the list of users who can use the feed, without giving a password or token to anyone.

When everything is done, you can simply queue a new build and choose the version number you want to assign to your NuGet package.

You can queue the build specifying branch and NuGet version number

Figure 7: Queuing a build to publish your package with a specific number.

You have the ability to choose the branch you want to publish, as well as the version number to use for your NuGet package. Once the build is finished, your package should be published.

Feed detail in MyGet account correctly list packages published by my vNext build

Figure 8: Your package is published in your MyGet feed.

In the previous example I used the master branch and published version number 1.3.1. Suppose you want to publish a pre-release package with new features that are not yet stable. These features are usually in the develop branch (especially if you use GitFlow with git repositories), and thanks to this configuration you can simply queue a new build to publish a pre-release package.

Specifying the develop branch and a package number ending with beta1 you can publish pre-release packages.

Figure 9: Publish a pre-release package using the develop branch and a NuGet version that has a -beta1 suffix.

I specified the develop branch and a NuGet version number ending with -beta1, to mark it as a pre-release package. When the build is finished you can check from Visual Studio that everything is ok.

Verify in Visual Studio that both the stable and the pre-release packages are ok.

Figure 10: Verify in Visual Studio that everything is ok.

Thanks to Build vNext, publishing your package to MyGet, NuGet or a private NuGet feed is just a matter of including a couple of steps and filling in a few textboxes.

Gian Maria.

Manage Artifacts with TFS Build vNext

Artifacts and Build vNext

Another big improvement of Build vNext in TFS and VSO is the ability to explicitly manage the content of artifacts during a build. In Continuous Integration, the term artifacts refers to every result of the build that is worth publishing together with the build result, to be further consumed by consumers of the build. Generally speaking, think of artifacts as the build's binary outputs.

The XAML build system does not give you much flexibility: it just uses a folder on the build agent to store everything, then uploads everything to the server or copies it to a network share.

To handle artifacts, the vNext build system introduces a dedicated task called Publish Build Artifacts.

Publish Build Artifacts options

Figure 1: Publish artifacts task

The first nice aspect is that we can add as many Publish Build Artifacts tasks as we want. Each task requires you to specify the contents to include, with a default value (for a Visual Studio build) of **\bin, which includes everything contained in directories called bin. This is an acceptable default to include the binary output of all projects, and you can change it to include whatever you want. Another important option is the Artifact Name, used to distinguish this artifact from the other ones. Remember that you can include multiple Publish Build Artifacts tasks, and the Artifact Name is a simple way to categorize what you want to publish. Finally, you need to specify whether the artifact type is Server (content will be uploaded to TFS) or File Share (you specify a standard UNC share path where the build will copy artifacts).

Artifacts browser

With a standard configuration as represented in Figure 1, after a build completes you can go to the Artifacts tab, where you should see an entry for each Publish Build Artifacts task included in the build.

List of artifacts included in build output.

Figure 2: Build details lists all artifacts produced by the build

You can easily download all the content of the folder as a single zip, but you can also press the Explore button to explore the content of the artifacts container directly from the web browser. You can easily use Artifacts Explorer to locate the content you are interested in and download it with a single click.

With the artifacts browser you can explore the content of an artifact directly from the browser and download single files.

Figure 3: Browsing content of an artifact

Using multiple artifacts tasks

In this specific example, the **\bin approach is probably not the best one. As you can see from the previous image, we are including binaries from test projects, wasting space on the server and making it harder for the consumer to find what he/she needs.

In this specific situation we are interested in publishing two distinct series of artifacts: a host program and a client dll used to talk to the host. In this scenario the best approach is using two distinct Publish Build Artifacts tasks, one for the client and the other for the host. If I reconfigure the build using two tasks and configure the Contents parameter to include only the folder of the project I need, the result is much better.
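For example, the Contents values of the two tasks could look something like this (project names here are purely illustrative):

**\MyProject.Client\bin\**
**\MyProject.Host\bin\**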

Multiple artifacts included in build output

Figure 4: Multiple artifacts for a single build output

As you can see from the previous image, using multiple tasks for publishing artifacts produces a better organization of the artifacts. In such a situation it is simple to immediately locate what you need, and download only the client or the host program. The only drawback is that we still miss a "download all" link to download all artifacts.

Prepare everything with a PowerShell script

If a project starts to become really complex, organizing artifacts can become a complex task too. In our situation the approach of including the whole bin folder of a project is not really good; what I need is folder manipulation before publishing the artifacts.

  • We want to remove all .xml files
  • We want to change some settings in the host configuration file
  • We need to copy content from other folders of source control

In such a scenario the Publish Build Artifacts task does not fulfill our requirements, and the obvious solution is adding a PowerShell script to your source code to prepare what I call a "release" of artifacts. A really nice thing about PowerShell is that you can create a ps1 file with the function that does what you need and declare named parameters

Param
(
    [String] $Configuration,                # configuration to release (Debug or Release)
    [String] $DestinationDir = "",          # directory where the script copies all the files
    [Bool] $DeleteOriginalAfterZip = $true  # delete uncompressed files after zipping
)

My script accepts three parameters: the configuration I want to release (Debug or Release), the destination directory where the script will copy all the files and, finally, whether I want the script to delete all uncompressed files in the destination directory.

The third option is needed because I'd like to use 7zip to compress the files in the output directory directly from my script. The two main reasons to do this are:

  • 7zip is a better compressor than a simple zip
  • It is simpler to create pre-zipped artifacts
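The full body of the script is not shown in this post; as a rough sketch of what it does (folder names, project layout and the availability of 7z.exe in the PATH are all assumptions), it could look like this:

# Sketch of the script body; paths and names are illustrative
$binDir = "src\MyApp.Host\bin\$Configuration"

# Copy the build output into the destination directory
Copy-Item -Path $binDir -Destination "$DestinationDir\Host" -Recurse

# Remove the .xml documentation files we do not want to ship
Get-ChildItem -Path "$DestinationDir\Host" -Filter *.xml -Recurse | Remove-Item

# Compress the folder with 7zip (assumes 7z.exe is in the PATH)
& 7z a -t7z "$DestinationDir\Host.7z" "$DestinationDir\Host\*"

# Optionally remove the uncompressed files, leaving only the .7z package
if ($DeleteOriginalAfterZip) {
    Remove-Item -Path "$DestinationDir\Host" -Recurse -Force
}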

Using a PowerShell script also has the great advantage that it can be launched manually, to verify that everything goes as expected or to create artifacts with the exact same layout of a standard build, an aspect that should not be underestimated. Once the script is tested on a local machine (an easy task), I have two files in my output directory.

Content of the folder generated by PowerShell script

Figure 5: Content of the output folder after PowerShell script ran

One of the biggest advantages of using PowerShell scripts is the ability to launch them locally to verify that everything works as expected, instead of the standard "modify", "launch the build", "verify" loop needed if you use build tasks.

Now I customize the build to use this script to prepare my release, instead of relying on some obscure and hard to maintain strings in the Publish Build Artifacts task.

Include a Powershell Task in the build to prepare artifacts folder

Figure 6: PowerShell task can launch my script and prepare artifacts directory

Thanks to the parameters I can easily specify the current configuration I'm building (release, debug) and the DestinationDir (I'm using the $(build.stagingDirectory) variable that contains the staging directory of the build). You can use whatever destination directory you want, but using the standard folder is probably the best option.

After this script you can place a standard Publish Build Artifacts task, specifying $(build.stagingDirectory) as the Copy Root folder, and filtering the content if you need to. Here is the actual configuration.

Publish Build Artifacts task can be used to publish PowerShell output

Figure 7: Include single Publish Build Artifacts to publish from directory prepared by PowerShell script

The only drawback of this approach is that we are forced to give an Artifact Name that will be used as a container for the files; you cannot directly publish pre-zipped files in the root of the build artifacts. If you want, you can include multiple Publish Build Artifacts tasks to publish each zipped file with a different Artifact Name.

Build artifacts contain a single artifact with all the zipped files

Figure 8: Output of the build

But even if this can be a limitation, sometimes it can be the best option instead. As you can see from the previous image, I have a primary artifact and you can press the Download button to download everything with a single click. Using Artifacts Explorer you can download separate packages, and this is probably the best approach.

Artifacts browser permits you to download single zip files

Figure 9: Artifact browser shows distinct zip file in the output

If you use a script to create one separate pre-compressed package for each distinct artifact, your publish experience will probably be better than with any other approach.

Conclusions

Build vNext gives us great flexibility on what to publish as artifacts; but even though we can manage everything with the dedicated task, if you want a good organization of your artifacts, using a PowerShell script to organize everything and pre-compress it into single files is usually the best approach.

Gian Maria.