Manage Artifacts with TFS Build vNext

Artifacts and Build vNext

Another big improvement of Build vNext in TFS and VSO is the ability to explicitly manage the content of artifacts during a build. In Continuous Integration, the term Artifacts refers to every output of the build that is worth publishing together with the build result, to be further consumed by whoever uses the build. Generally speaking, think of artifacts as the binary outputs of the build.

The XAML build system does not give you much flexibility: it just uses a folder on the build agent to store everything, then uploads it to the server or copies it to a network share.

To handle artifacts, the vNext build system introduces a dedicated task called Publish Build Artifacts.

Publish Build Artifacts options

Figure 1: Publish artifacts task

The first nice aspect is that we can add as many Publish Build Artifacts tasks as we want. Each task requires you to specify the contents to include, with a default value (for Visual Studio Build) of **\bin to include everything contained in directories called bin. This is an acceptable default to include the binary output of all projects, and you can change it to include whatever you want. Another important option is Artifact Name, used to distinguish this artifact from the other ones. Remember that you can include multiple Publish Build Artifacts tasks, and Artifact Name is a simple way to categorize what you want to publish. Finally, you need to specify whether the artifact type is Server (content is uploaded to TFS) or File Share (you specify a standard UNC share path where the build will copy the artifacts).
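For reference, a minimal configuration of the task looks more or less like this (the values are illustrative, drop is just a conventional artifact name):

Contents:       **\bin
Artifact Name:  drop
Artifact Type:  Server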

Artifacts browser

With a standard configuration like the one in Figure 1, after a build completes you can go to the Artifacts tab, where you should see an entry for each Publish Build Artifacts task included in the build.

List of artifacts included in build output.

Figure 2: Build details lists all artifacts produced by the build

You can easily download the whole content of the folder as a single zip, but you can also press the Explore button to browse the content of the artifacts container directly from the web browser. The Artifacts Explorer makes it easy to locate the content you are interested in and download it with a single click.

With the artifacts browser you can explore the content of an artifact directly from the browser and download individual files.

Figure 3: Browsing content of an artifact

Using multiple Publish Build Artifacts tasks

In this specific example, the **\bin approach is probably not the best choice. As you can see from the previous image, we are including binaries from test projects, wasting space on the server and making it harder for the consumer to find what he/she needs.

In this situation we are interested in publishing two distinct sets of artifacts: a host program and a client dll used to talk to the host. The best approach here is using two distinct Publish Build Artifacts tasks, one for the client and the other for the host. If I reconfigure the build with two tasks and set the Contents parameter of each to include only the folder of the project I need, the result is much better.
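As a sketch, the two tasks could be configured along these lines (the folder and artifact names are invented for this example):

Publish Build Artifacts (1)
    Contents:       Host\bin\**
    Artifact Name:  Host

Publish Build Artifacts (2)
    Contents:       Client\bin\**
    Artifact Name:  Client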

Multiple artifacts included in build output

Figure 4: Multiple artifacts for a single build output

As you can see from the previous image, using multiple tasks for publishing artifacts produces a better organization of the output. It is now simple to immediately locate what you need and download only the client or the host program. The only drawback is that we still miss a “download all” link to grab all artifacts at once.

Prepare everything with a PowerShell script

When a project starts to become really complex, organizing artifacts can become a complex task in itself. In our situation the approach of including the whole bin folder of a project is not really good; what I need is some folder manipulation before publishing the artifacts:

  • We want to remove all .xml files
  • We want to change some settings in the host configuration file
  • We need to copy content from other folders of source control

In such a scenario the Publish Build Artifacts task alone does not fulfill our requirements, and the obvious solution is adding a PowerShell script to your source code to prepare what we could call a “release” of the artifacts. A really nice thing about PowerShell is that you can create a ps1 file that does what you need and declare named parameters:

Param
(
    [String] $Configuration,
    [String] $DestinationDir = "",
    [Bool] $DeleteOriginalAfterZip = $true
)

My script accepts three parameters: the configuration I want to release (Debug or Release), the destination directory where the script will copy all the files, and finally whether the script should delete all uncompressed files from the destination directory after zipping.

The third option is needed because I’d like to use 7-Zip to compress the files in the output directory directly from my script. The two main reasons to do this are:

  • 7zip is a better compressor than a simple zip
  • It is simpler to create pre-zipped artifacts
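To give an idea, the body of such a script could look roughly like the following sketch; the folder names, the project layout and the 7-Zip invocation are illustrative assumptions, not the exact script used here.

# Copy the host binaries for the requested configuration into the destination folder
$hostBin = Join-Path $PSScriptRoot "src\Host\bin\$Configuration"
Copy-Item -Path $hostBin -Destination (Join-Path $DestinationDir "Host") -Recurse

# Remove all .xml files from the copied output
Get-ChildItem -Path $DestinationDir -Filter *.xml -Recurse | Remove-Item

# Here you would also tweak the host configuration file, e.g. with Get-Content / Set-Content

# Compress the prepared folder with 7-Zip (assumes 7z.exe is in the PATH)
& 7z a -t7z (Join-Path $DestinationDir "Host.7z") (Join-Path $DestinationDir "Host\*")

# Optionally remove the uncompressed files, leaving only the pre-zipped artifact
if ($DeleteOriginalAfterZip)
{
    Remove-Item (Join-Path $DestinationDir "Host") -Recurse -Force
}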

Using a PowerShell script also has the great advantage that it can be launched manually to verify that everything goes as expected, or to create artifacts with the exact same layout of a standard build, an aspect that should not be underestimated. Once the script is tested on a local machine (an easy task) I have two files in my output directory.

Content of the folder generated by PowerShell script

Figure 5: Content of the output folder after PowerShell script ran

One of the biggest advantages of using PowerShell scripts is the ability to launch them locally to verify that everything works as expected, instead of the “modify”, “queue the build”, “verify” loop needed if you only use build tasks.

Now I can customize the build to use this script to prepare my release, instead of relying on some obscure and hard to maintain filter string in the Publish Build Artifacts task.

Include a PowerShell task in the build to prepare the artifacts folder

Figure 6: PowerShell task can launch my script and prepare artifacts directory

Thanks to the parameters I can easily specify the configuration I’m building (Release or Debug) and the DestinationDir (I’m using the $(build.stagingDirectory) variable, which contains the staging directory of the build). You can use whatever destination directory you want, but using the standard folder is probably the best option.
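For reference, the PowerShell task shown in Figure 6 is essentially configured with the script path and an Arguments string along these lines (the script path and the $(BuildConfiguration) variable name are illustrative assumptions):

Script filename:  src\PrepareRelease.ps1
Arguments:        -Configuration $(BuildConfiguration) -DestinationDir $(build.stagingDirectory)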

After this script you can place a standard Publish Build Artifacts task, specifying $(build.stagingDirectory) as the Copy Root folder and filtering the content if you need to. Here is the actual configuration.

Publish Build Artifacts task can be used to publish the PowerShell output

Figure 7: Include single Publish Build Artifacts to publish from directory prepared by PowerShell script

The only drawback of this approach is that we are forced to give an Artifact Name that acts as a container for the files; you cannot publish pre-zipped files directly in the root of the build artifacts. If you want, you can include multiple Publish Build Artifacts tasks to publish each zipped file with a different Artifact Name.

Build artifacts contain a single artifact with all the zipped files

Figure 8: Output of the build

Even if this can be a limitation, sometimes it is actually the best option. As you can see from the previous image, I have a single primary artifact and can press the Download button to download everything with a single click. Using the Artifacts Explorer you can still download the separate packages, and this is probably the best approach.

Artifacts browser permits you to download single zip files

Figure 9: Artifacts browser shows the distinct zip files in the output

If you use a script to create a separate pre-compressed package for each artifact, your publishing experience will probably be better than with any other approach.

Conclusions

Build vNext gives us great flexibility on what to publish as artifacts. Even though we can manage everything with the dedicated task, if you want good organization of your artifacts, using a PowerShell script to arrange everything and pre-compress it into single files is usually the best approach.

Gian Maria.

Build vNext, support for deploying bits to Windows machines

One of the most interesting trends of the DevOps movement is continuous deployment driven by build machines. Once you get your continuous build up and running, the next step is customizing the build to deploy to one or more test environments. If you do not need to deploy to production, there is no need for a controlled release pipeline (e.g. Release Management), and using a simple build is the most productive choice. In this scenario one of the biggest pains is moving bits from the build machine to the target machines. Once the build output is on a machine, installing the bits is usually only a matter of running some PowerShell script.

In a Build, Deploy, Test scenario, quite often copying the build output to the target machines is the most difficult part.

Thanks to Build vNext, solving this problem is super easy. If you go to your visualstudio.com account and choose the TEST hub, you will notice a submenu called Machines.


Figure 1: Machines functionality in Visualstudio.com

This new menu is related to a new feature, still in preview, used to define groups of machines that can be used in deploy and testing workflows. In Figure 1 you can see a group called Cyberpunk1. Creating a group is super easy: you just need to give it a name, specify administrative credentials and list the machines that will compose the group. You can also use different credentials for each machine, but using machines in an Active Directory domain is usually the simplest scenario.


Figure 2: Editing of machine groups

At the moment this feature does not yet support Azure Virtual Machines, but you can easily target machines in your on-premises infrastructure. You just need to be sure that:

  • All machines are reachable from the machine where the build agent is running
  • DNS name resolution works for all the target machine names
  • PowerShell remoting is enabled on all target machines (see the sketch after this list)
  • File sharing is enabled on all target machines
  • The required firewall ports are open.
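On each target machine, the last three requirements can usually be satisfied from an elevated PowerShell prompt; the following is just a sketch (the firewall rule group name can differ with the OS language):

# Enable PowerShell remoting (WinRM listener plus firewall exception)
Enable-PSRemoting -Force

# Enable the built-in firewall rules that allow file and printer sharing
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"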

I’ve tested this in an environment where both machines run Windows Server 2012 R2, with the latest updates and file sharing enabled. Once you have defined a machine group, you can use it to automatically copy files from the build agent to all machines with a simple Build vNext task.


Figure 3: Windows machine File Copy task

Thanks to this simple task you can copy files from the build machine to the destination machines without installing any agent or other component on them. All you need to do is choose the machine group, the source folder and the target folder.

If you get an error running the build, a nice new feature of Build vNext is the ability to download the full log as a zip, where the logs are separated by task.


Figure 4: All build logs are separated for each step, to simplify troubleshooting

Opening the file 4_Copy … you can read the log related to the copy build step, to understand why the step is failing. Here is what I found in one of my builds that failed.

Failed to connect to the path \\vsotest.cyberpunk.local with the user administrator@cyberpunk.local for copying.System error 53 has occurred. 
 2015-06-10T08:35:02.5591803Z The network path was not found.

In this specific situation the RoboCopy tool is complaining that the network path was not found, because I forgot to enable file sharing on the target machine. Once I enabled file sharing and ran the build again, everything was green, and I could verify that all files were correctly copied to the target machines.

As a general rule, whenever a build fails, download all the logs and inspect the specific log of the task that failed.


Figure 5: Sample application was correctly copied to target machines.

In my first sample I used the TailspinToys sample application and configured MSBuild to use the staging directory as output folder with the OutDir parameter: /p:OutDir=$(build.stagingDirectory). Thanks to the Windows Machine File Copy task, all the build output is automatically copied to the target machines.

Once your build output is copied to the target machines, you only need a script that installs the new bits, and maybe some integration tests to verify that the application is in a healthy state.

Gian Maria

Build vNext and continuous integration on GitHub

One of the great pieces of news about Build vNext is the ability to create a build that targets the source of a GitHub project, not only Git or TFVC repositories hosted in the current TFS or VSO instance. Given this, plus the fact that VSO is free for up to 5 basic users, you can use VSO as a Continuous Integration engine for your GitHub projects.

To create a build that targets GitHub source code, simply log in to your GitHub account, navigate to your personal settings and choose “Personal access tokens”.


Figure 1: Personal access tokens in GitHub

I do not want to cover how GitHub tokens work in detail; basically each token has a set of capabilities associated with it, so you can decide the level of access granted to each token. This is particularly important if you want the token to access only your public repositories and not the private ones.


Figure 2: Access levels associated with your token

Once you have a valid token you should store it securely in some tool (I use KeePass), because the GitHub UI will not let you retrieve it later. With the token in hand you can instruct VSO to create a Continuous Integration build.

The first step is having a VSO account and creating a Team Project (I’ve called my Team Project GitHub), then navigating to the Build tab. Once you press the green plus button you are asked for the type of the build; I chose Visual Studio for this example because my GitHub project contains a Visual Studio solution.

Once the build editor opens, go to the Repository tab, choose GitHub as repository type, and insert the previously generated access token. If the token is good, after a few seconds you should be able to see the list of GitHub repositories you have access to.


Figure 3: Configure the build to take sources from GitHub

Thanks to the access token, the VSO build infrastructure can check whether the repository receives new pushes. If you go to the Triggers tab you can ask VSO to trigger a build for each push on all the branches you want to monitor.


Figure 4: You can use CI even if the repository is on GitHub

This is all you need to create the build. To verify that everything is OK you can simply queue a new build and, if the build has no specific requirements, use the Hosted Agent offered by VSO to build your project.

You can safely remove the Index Sources & Publish Symbols task, because VSO cannot index source code that lives outside your account. If you leave this task in the build, you will get warnings during the build.


Figure 5: Remove the Index Sources task from your GitHub build.

Finally, in the General Tab, you can ask VSO to generate a Badge for the build.


Figure 6: Build badge

A badge is a simple URL that renders an image specifying whether the latest build succeeded, failed or partially succeeded. Once you have a badge, you can show it on the home page of your project in GitHub, simply by referencing it in the Readme.md file in the project root.
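For example, the Readme.md only needs a standard Markdown image reference pointing to the badge URL shown in the General tab (the URL below is just a placeholder):

![Build status](https://youraccount.visualstudio.com/DefaultCollection/_apis/public/build/definitions/.../badge)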


Figure 7: Build badge in action.

Et voilà, with a few clicks you have Continuous Integration for a GitHub project using Visual Studio Online.

Gian Maria.

Build vNext, distributing load to different agents

One of the major benefits of the new build infrastructure of TFS and Visual Studio Online is the easy deployment of build agents. The downside of this approach is that your infrastructure becomes full of agents, and you need some way to determine which agent(s) to use for a specific build. The problem is:

how to avoid running builds on machines that are “not appropriate” for that build.

Running on a specific agent

If you are customizing a build, or if you want to run the build on a specific agent on a specific machine (e.g. a local agent), the solution is super easy: just edit the build definition and, in the General tab, add a demand named Agent.Name whose value is the name of the desired agent.


Figure 1: Adding a demand for a specific agent

If the agent is not available, you are warned when you try to queue the build.


Figure 2: Warning when queuing a build if there are agent problems

In this situation the system is warning me that there are agents compatible with the build, but they are offline. You can queue the build anyway and it will be executed when the agent comes back online, or you can press cancel and investigate why the agent is offline.

If you want to run a build on a specific agent, just add an Agent.Name demand to the build.

Specifying demands

The previous example is interesting because it uses build demands against properties of the agent. If you navigate to the build agent administration page, for each agent you can see all the associated properties.


Figure 3: Agent capabilities

At the top there are User Capabilities, which are editable and are used to assign custom capabilities to your agents. At the bottom there are System Capabilities, automatically determined by the agent itself and therefore not editable. Among them you can find interesting entries such as VisualStudio, which tells you whether the agent runs on a machine where Visual Studio is installed; other capabilities report the exact version of Visual Studio.

This is really important, because if you examine the demands of a build you can see that some of them are already present in the build definition and cannot be removed.


Figure 4: Some of the demands are read-only; you can add your own demands

If you wonder why the build has some predefined, read-only demands, the answer is that they come from the tasks in the build definition.


Figure 5: Build definition

If you look at the build definition, it contains a Visual Studio Build task, and it is this task that automatically adds the visualstudio demand to the build. This explains why that demand cannot be removed: it is required by a build step. If you remove the Visual Studio Test task, you can verify that the vstest demand is gone as well.

Each build task automatically adds the demands it needs, to make sure it runs on compatible agents.

In my example I have some agents deployed in my office; I use them to run build definitions composed only of build and test tasks, with no publish artifacts step. The reason is that I have really low upload bandwidth in my office, but fast machines with really fast SSDs and tons of RAM, so they are able to quickly build and test my projects.

Then I have some builds that publish artifacts, and I want to be sure that those builds are not executed on the agents in my office, or they will saturate my upload bandwidth. To avoid this problem I simply add an uploader demand to these builds and manually add that capability to agents deployed on machines that have no upload bandwidth problem, e.g. agents deployed on Azure Virtual Machines (see the sketch below).
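As a sketch, the pairing looks like this (uploader is just the capability name I chose, any name works as long as agent and build agree on it):

On the agents with good bandwidth (User Capability):  uploader = true
On the publishing builds (Demand):                    uploader exists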

You can use custom Demands to be sure that a build runs on agents with specific capabilities.

Using Agent pools to separate build environments

The final way to subdivide work among your agents is using agent pools. A pool is similar to the old concept of controller: each build is associated with a default pool and can be scheduled only on agents bound to that pool. Using different pools can be useful if you have really different build environments and you want a strong separation between them.

A possible example: if you have agents deployed on fast machines with fast SSDs or a RAM disk to speed up building and testing, you can create a dedicated pool with a name such as FastPool. Once the pool is created you can schedule high-priority builds on that pool to be sure they complete in the least amount of time. You can further subdivide the agents in that pool using capabilities such as SSD, RAMDISK, etc.

You can also create a pool called “Priority” to execute high-priority builds, making sure that some slow build does not hold up the high-priority ones. If you have two offices located far apart, you can have a different pool for each office, so that certain builds are executed on the local network of that office, and so on.

With agent pools you can have a strong separation between your build environments.

To conclude this post: use agent pools when you want a strong separation between distinct build environments that share some strong common characteristic (physical location, machine speed, priority, etc.). Inside a pool you can further subdivide work among agents using custom demands.

As a final note, as of today, with the latest update of VSO, the new build engine is no longer in preview: the tab has no asterisk anymore and the keyword PREVIEW is gone :), so Build vNext in VSO has reached GA.


Gian Maria.

TFS New Build System: vNext agents

With the latest Visual Studio Online update, the new build system is now online for all users. As I said in an older post, it has been completely rewritten, and covering all the new features really requires a lot of time. Since I’m a great fan of Continuous Integration and Continuous Deployment procedures, I’d like to write some posts to introduce this new build system, along with the reasons why it is really superior to the old one.

In this post I’ll outline some of the improvements regarding build infrastructure management, starting from the most common problems that customers encountered with the older approach.

Each project collection needs a separate build machine

In the old build system you have to install the TFS bits and configure agents and a controller to create a build machine, and each machine can provide build capabilities for only one Project Collection.

With the new build system, the controller is part of the TFS/VSO core installation; you only need to deploy agents on build machines, and an agent can provide build capabilities for an entire TFS instance.

This means that an agent is connected to an entire TFS instance and can build stuff from every Project Collection on your server. This simplifies your infrastructure a lot, because you can use a single build machine even if you have multiple Project Collections. If you are a small shop and use multiple Project Collections to achieve full project separation, a single build machine is probably enough.

Deploying a new agent is complex

With the traditional build system you need to deploy controllers and agents. Both of them require a full TFS installation, followed by the build configuration. Everything is administered through the standard TFS Administration Console.

With the new build system the build controller is part of the core installation; you only need to deploy agents on your machines.

Deploying an agent is a matter of downloading a file, unzipping it and running a PowerShell script.

If you are interested in the full installation details, you can follow the step-by-step MSDN procedure here: https://msdn.microsoft.com/Library/vs/alm/Build/agents/windows. This greatly simplifies the setup of a new agent.
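Just to give an idea of how light the process is, at the time of writing the setup looks roughly like this; check the MSDN page above for the exact, current steps.

# From an elevated PowerShell prompt, in the folder where the agent zip was extracted
.\ConfigureAgent.ps1
# The script prompts for the server URL, the agent name, the pool, the work folder
# and whether to register the agent as a Windows service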

I have multiple VSO accounts, I need multiple build machines

Since the agent is a simple console application, you can install more agents simply by unzipping the agent files into different directories on the same machine. In the end you can have a single machine with agents connected to multiple VSO or TFS instances.

You can use a single build machine for multiple TFS2015 or VSO instances

Building is limited to Windows machines

Since agents are installed with the TFS bits, the old build system only allows agents to run on Windows machines. Thanks to some build extensions you can run Ant and Maven builds, but only from a Windows box.

The new build system also provides agents for non-Windows machines thanks to the xPlat agent: https://msdn.microsoft.com/Library/vs/alm/Build/agents/xplat

You can run TFS/VSO builds on non-Windows machines such as Linux and Mac.

Running a local build is complex for developers

There are lots of reasons to run a build on a local machine, and with the old system you needed a full TFS installation, leading to a maintenance nightmare. Usually you end up with a lot of stuff in your Continuous Integration scripts, such as publishing NuGet packages and much more. In such a scenario, running a build on a local machine should be an easy task for developers.

The new agent can run as a service, or interactively with a simple double-click on the VsoAgent.exe file. Each developer can install an agent in minutes and run it only when he/she really needs it.

Running a local build is really easy, because you can start a local agent with a double-click, only when you need it.

Since the agent is a simple console application, you can also attach a debugger to it if you need to debug custom code that runs during a build.

Keeping up with updates is complex

If you are a large enterprise you probably have several build machines, and with older versions of TFS you had to update all build agents and controllers each time you updated the TFS instance. Starting with TFS 2012 Update 2 this requirement is relaxed, and an older build system can target a newer version of TFS. This gives you a timeframe to update all build machines after you update the TFS core installation, but you still need to run the TFS update on every machine with build agents or controllers installed.

You will be happy to know that the new build agent has automatic update capabilities. It checks for a new version at each startup: if a newer version of the agent is available for the server you are connected to, it automatically downloads it, upgrades itself and restarts with the new version.

The only drawback is that, if you are running the agent as a Windows service, you need to restart the service for the upgrade check to take place. In a future version we can expect the agent to check periodically for upgrades instead of requiring a restart.

The overall feeling is that the new build infrastructure will be really easier to maintain and deploy, and it also gives you new capabilities, like building on non-Windows machines.

Gian Maria.