Manage Artifacts with TFS Build vNext

Artifacts and Build vNext

Another big improvement of Build vNext in TFS and VSO is the ability to explicitly manage the content of artifacts during a build. In Continuous Integration, the term artifacts refers to every output of the build that is worth publishing together with the build result, to be further consumed by the consumers of the build. Generally speaking, think of artifacts as the binary outputs of the build.

The XAML build system does not give you much flexibility: it just uses a folder on the build agent to store everything, then uploads everything to the server or copies it to a network share.

To handle artifacts, the vNext build system introduces a dedicated task called Publish Build Artifacts.


Figure 1: Publish artifacts task

The first nice aspect is that we can add as many Publish Build Artifacts tasks as we want. Each task requires you to specify the contents to include, with a default value (for Visual Studio Build) of **\bin, which includes everything contained in directories called bin. This is an acceptable default to include the binary output of all projects, and you can change it to include whatever you want. Another important option is the Artifact Name, used to distinguish these artifacts from the other ones. Remember that you can include multiple Publish Build Artifacts tasks, and the Artifact Name is a simple way to categorize what you want to publish. Finally, you need to specify whether the artifact type is Server (content will be uploaded to TFS) or File Share (you specify a standard UNC share path where the build will copy artifacts).

Artifacts browser

With a standard configuration like the one in Figure 1, after a build completes you can go to the Artifacts tab, where you should see an entry for each Publish Build Artifacts task included in the build.


Figure 2: The build details page lists all artifacts produced by the build

You can easily download the whole content of the folder as a single zip, but you can also press the Explore button to browse the content of the artifacts container directly from the web browser. With Artifacts Explorer you can easily locate the content you are interested in and download it with a single click.


Figure 3: Browsing content of an artifact

Using multiple artifacts tasks

In this specific example, the **\bin approach is probably not the suggested one. As you can see from the previous image, we are including binaries from test projects, wasting space on the server and making it harder for consumers to find what they need.

In this specific situation we are interested in publishing two distinct series of artifacts: a host program and a client dll used to talk to the host. In this scenario the best approach is using two distinct Publish Build Artifacts tasks, one for the client and the other for the host. If I reconfigure the build with two tasks and set the Contents parameter of each to include only the folder of the project I need, the result is much better.
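
For reference, the Contents parameters of the two tasks could look something like the following; the project folder names are assumptions, the real ones depend on the solution layout.

    MyApp.Host\bin
    MyApp.Client\bin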


Figure 4: Multiple artifacts for a single build output

As you can see from the previous image, using multiple tasks for publishing artifacts produces a better organization of the output. In such a situation it is simple to immediately locate what you need and download only the client or the host program. The only drawback is that we still miss a “download all” link to grab every artifact at once.

Prepare everything with a PowerShell script

If a project starts to become really complex, organizing artifacts can become a complex task too. In our situation the approach of including the whole bin folder of a project is not really good; what I need is some folder manipulation before publishing the artifacts:

  • We want to remove all .xml files
  • We want to change some settings in the host configuration file
  • We need to copy content from other folders of source control

In such a scenario the Publish Build Artifacts task does not fulfill our requirements, and the obvious solution is adding a PowerShell script to your source code to prepare what we could call a “release” of the artifacts. A really nice aspect of PowerShell is that you can create a ps1 file with the function that does what you need and declare named parameters:

    param(
        [String] $Configuration,
        [String] $DestinationDir = "",
        [Bool] $DeleteOriginalAfterZip = $true
    )

My script accepts three parameters: the configuration I want to release (Debug or Release), the destination directory where the script will copy all the files, and finally whether the script should delete all uncompressed files left in the destination directory.

The third option is needed because I’d like to use 7-Zip to compress the files in the output directory directly from my script. The two main reasons to do this are listed below, followed by a sketch of the whole script:

  • 7zip is a better compressor than a simple zip
  • It is simpler to create pre-zipped artifacts
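
The original script is not listed in the post, so here is a minimal sketch of what it could look like; the 7z.exe path, the Host/Client folder layout and the use of robocopy are assumptions, not the author's actual script.

    param(
        [String] $Configuration,
        [String] $DestinationDir = "",
        [Bool] $DeleteOriginalAfterZip = $true
    )

    # Assumption: the script lives in the repository root, next to the project folders.
    $sourceDir = $PSScriptRoot

    foreach ($project in "Host", "Client")
    {
        # Copy the build output of the project, excluding .xml files.
        $binDir = Join-Path $sourceDir "$project\bin\$Configuration"
        $outDir = Join-Path $DestinationDir $project
        robocopy $binDir $outDir /MIR /XF *.xml | Out-Null

        # Pre-compress each package with 7-Zip (install path is an assumption).
        & "C:\Program Files\7-Zip\7z.exe" a -t7z "$DestinationDir\$project.7z" "$outDir\*"

        if ($DeleteOriginalAfterZip)
        {
            # Leave only the compressed packages in the destination directory.
            Remove-Item $outDir -Recurse -Force
        }
    }

To test it locally you could run something like .\PrepareRelease.ps1 -Configuration Release -DestinationDir C:\temp\drop (the script name is hypothetical); changing settings in the host configuration file is omitted from the sketch for brevity.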

Using a PowerShell script also has the great advantage that it can be launched manually, to verify that everything goes as expected or to create artifacts with the exact same layout of a standard build, an aspect that should not be underestimated. Once the script is tested on a local machine (an easy task), I have two files in my output directory.


Figure 5: Content of the output folder after PowerShell script ran

This ability to iterate locally is one of the biggest advantages of using PowerShell scripts: you avoid the standard “modify, launch the build, verify” loop needed if you use build tasks.

Now I customize the build to use this script to prepare my release, instead of relying on some obscure and hard to maintain strings in the Publish Build Artifacts task.


Figure 6: PowerShell task can launch my script and prepare artifacts directory

Thanks to the named parameters, I can easily specify the current configuration I’m building (Release or Debug) and the DestinationDir; for the latter I use the $(build.stagingDirectory) variable, which contains the staging directory of the build. You can use whatever destination directory you want, but the standard folder is probably the best option.
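
For reference, the Arguments box of the PowerShell task could contain something like the following; the BuildConfiguration variable name is an assumption.

    -Configuration $(BuildConfiguration) -DestinationDir $(build.stagingDirectory)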

After this script you can place a standard Publish Build Artifacts task, specifying $(build.stagingDirectory) as the Copy Root folder and filtering the content if you need to. Here is the actual configuration.


Figure 7: A single Publish Build Artifacts task publishes from the directory prepared by the PowerShell script

The only drawback of this approach is that we are forced to give an Artifact Name that will contain the files: you cannot publish pre-zipped files directly in the root of the build artifacts. If you want, you can include multiple Publish Build Artifacts tasks to publish each zipped file with a different Artifact Name.


Figure 8: Output of the build, a single artifact containing all the zipped files

Even if this can be a limitation, sometimes it is actually the best option. As you can see from the previous image, I have a single primary artifact, and you can press the Download button to download everything with a single click. With Artifacts Explorer you can still download the individual packages, and this is probably the best approach.


Figure 9: Artifacts browser shows the distinct zip files in the output

If you use a script to create a separate pre-compressed package for each distinct artifact, your publishing experience will probably be better than with any other approach.


Build vNext gives us great flexibility on what to publish as artifacts. Even if we can manage everything with the dedicated task, if you want a good organization of your artifacts, using a PowerShell script to arrange everything and pre-compress each package into a single file is usually the best approach.

Gian Maria.

Deploy remotely with TFS build

It is time to connect a couple of posts of mine: in the first I explained how to deploy a web application to a remote machine with the use of Beyondexec2, and in another one I explained how to create a simple TFS build that does not actually build anything, but executes a simple workflow.

In this post I’ll cover a primitive build workflow to deploy the result of another build. The starting point is having a build called “Demo” that builds a web site and creates the installer package, plus the script described here. You need to insert the scripts and the PsExec utility in the source code of your team project, so they are available to the build agent during the build, as shown in Figure 1. Note: in this example I’ll use PsExec instead of beyondexec2; the two tools are exactly equivalent, but PsExec is better maintained and works better when executed from services.


Figure 1: Include deploy related files into source control system

Now you need to modify the deploy script created in the other blog post, adding all the operations needed to deploy a build. First of all you need to define some more parameters (Figure 2): the number of the build to use (e.g. demo_20100607.3), the name of the machine where you want to install, and the password of the administrator account of that machine.


Figure 2: Parameters of the workflow

Now, since the tools and scripts used for the remote deploy are stored in the source control system, the build script needs to create a workspace and do a get latest; to do this you can reuse the relevant section of the standard workflow, shown in Figure 3.


Figure 3: Details of workspace management

The steps in Figure 3 are taken from the standard workflow: it is a common sequence of operations that creates a workspace, does a get latest and manages some variables. If you run this workflow as-is, you can verify that a new workspace is created on the build machine, and you can browse to the build directory (usually c:\builds\1\teamprojectname\etcetc) to see the downloaded files. But before doing this, you need to specify the folders to grab in the workspace section of the build configuration. As you can see in Figure 4, I simply grab the BuildTools subdirectory, because there is no need to do a get latest of the project sources, only of the deploy scripts.


Figure 4: Configuration of the workspace

Now I only need to execute the PsExec process to remotely run the script on the machine where I want my web application to be deployed, and this can be done with a simple Invoke Process activity, as shown in Figure 5.

Figure 5: Invoke Process activity permits execution of an external process.

The FileName property specifies the process to execute; for this example it is:

    SourcesDirectory + "\psexec.exe"

Since SourcesDirectory is the directory used to map the workspace, I can execute psexec directly from there. The other important property is Arguments:

"\\" + DeployMachine +  " -u " + DeployMachine + "\administrator -p " + DeployMachinePassword +
" /accepteula -i -f -h -c " + SourcesDirectory + "\deploy\Deployweb.bat " + BuildToUse

This is simply a combination of workflow parameters used to build the argument list. The /accepteula switch is needed because psexec shows an EULA that must be accepted, and clearly there is no one around to click Accept when it runs inside a service; the -c option forces the file to be copied to the remote computer and executed there. After the Invoke Process, Figure 6 shows the end of the workflow, with a condition that verifies whether the PsExec return value is zero (success) or greater than zero (error).


Figure 6: Check return value of PsExec and fail the build if greater than zero.

The SetBuildProperties activity permits setting a property of the build; in this situation I set the status to Failed. Now you can create a build definition, configure the parameters and see the result.


Figure 7: Log of a successful build.

The only drawback is that you only see the output of the psexec program, not the output of the execution of DeployWeb.bat on the remote machine. Since you can specify the machine and the build number to use, this is a good build script to deploy something on a remote machine with a single click.


P.S. This is the first post following Adam Cogan’s SSW Rules (thanks Adam, you rock):

– the balloon rule, instead of walls of text

– the figure/caption rule

Wrap a MsBuild Custom task inside a custom action

If you have an MSBuild custom task that you want to reuse in a TFS 2010 build workflow, you have two solutions. The first is using the MSBuild activity as I described in this post, but this approach has a lot of limitations.

First of all it is clumsy, because you have to pass the custom task parameters as arguments to MSBuild, but the worst problem is that you lose the ability to use the output properties of the custom task. Suppose you have a TinyUrl custom task that takes a URL as input and gives back the tined version; this custom task has the following implementation.
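
The original implementation was shown as an image; here is a minimal sketch of what such a task could look like, reconstructed from the property names and the warning message quoted later in the post (the TinyURL API call and the 20-character threshold are assumptions):

using System.Net;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class TinyUrlTask : Task
{
    // Input property: the url to shorten.
    [Required]
    public string Url { get; set; }

    // Output property: the shortened ("tined") url.
    [Output]
    public string TinedUrl { get; set; }

    public override bool Execute()
    {
        if (Url.Length < 20)
        {
            Log.LogWarning("There is no need to tiny the url because is less than 20 chars");
            TinedUrl = Url;
            return true;
        }
        using (WebClient client = new WebClient())
        {
            TinedUrl = client.DownloadString(
                "http://tinyurl.com/api-create.php?url=" + Url);
        }
        return true;
    }
}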


Now suppose you do not have this source code, so you really need to use the MSBuild custom task: if you simply use the MSBuild activity as described in the previous post, how can you grab the TinedUrl output property and pass its value to the workflow engine?

To solve this problem you can use another approach to reuse a custom MSBuild task in a TFS 2010 build: wrap the task execution in a custom activity. First of all we need to fool the custom MSBuild task into believing it is executing inside MSBuild. The first problem is: how can I intercept the inner calls to Log.LogMessage or Log.LogWarning made inside the custom task and forward them to the workflow engine? The solution is this simple class.

class WorkflowBuildEngine : IBuildEngine
{
    public CodeActivityContext Context { get; set; }

    public WorkflowBuildEngine(CodeActivityContext context)
    {
        Context = context;
    }

    public void LogErrorEvent(BuildErrorEventArgs e)
    {
        Utils.LogError(e.Message, Context);
    }

    public void LogMessageEvent(BuildMessageEventArgs e)
    {
        Utils.LogMessage(e.Message, Context, BuildMessageImportance.Normal);
    }

    public void LogWarningEvent(BuildWarningEventArgs e)
    {
        Utils.LogWarning(e.Message, Context);
    }

    public string ProjectFileOfTaskNode
    {
        get { throw new NotImplementedException(); }
    }

    // The remaining IBuildEngine members (BuildProjectFile, LogCustomEvent,
    // ColumnNumberOfTaskNode, LineNumberOfTaskNode, ContinueOnError) are never
    // called by this task, so they can simply throw NotImplementedException.
}
It implements IBuildEngine; its constructor requires a CodeActivityContext that is used inside the LogErrorEvent, LogMessageEvent and LogWarningEvent methods to forward the log messages issued by the custom task to the workflow engine. In this way every log call that takes place inside the MSBuild custom task gets forwarded to the workflow engine. Finally you need to create the TinyUrl custom activity that wraps the custom MSBuild task:

public sealed class TinyUrl : CodeActivity<String>
{
    public InArgument<string> Url { get; set; }

    protected override String Execute(CodeActivityContext context)
    {
        // Create the wrapped MSBuild task and fool it into believing
        // it is running inside MSBuild.
        TinyUrlTask wrappedTask = new TinyUrlTask();
        WorkflowBuildEngine engine = new WorkflowBuildEngine(context);
        wrappedTask.BuildEngine = engine;
        wrappedTask.Url = Url.Get(context);
        if (!wrappedTask.Execute())
        {
            Utils.LogError("Tiny url task failed", context);
        }
        return wrappedTask.TinedUrl;
    }
}
The first important aspect is that it inherits from CodeActivity<String> instead of a simple CodeActivity, because this activity returns a string (the tined URL): the type parameter tells the workflow the return type of the activity. The Execute method is different too, because it must return the result of the activity. As you can see, the first operation is creating the MSBuild custom task and a WorkflowBuildEngine, which gets assigned to the BuildEngine property of the task. Once the engine is in place, you populate all the input properties of the MSBuild custom task and then call Execute.

If the return value of Execute is false, the activity logs the error (so the build partially fails); finally it returns the value to the caller, because output properties of MSBuild custom tasks are simple .NET properties, so the tined URL can be read directly from the TinedUrl property of the task. The good part of this technique is that you can use this activity from the graphical designer.


If you compare this with the approach based on the MSBuild activity, you have several advantages: you can use the graphical designer, you can edit the properties with the full Workflow Foundation expression editor, and you can use output properties. I inserted a WriteBuildMessage after the TinyUrl custom activity to verify that the TinedUrl property is correctly set by the activity. If you run the build you can verify that everything works. I placed two TinyUrl custom activities inside the workflow; the second one shortens an already short URL, just to trigger the warning inside the MSBuild custom task.


If you look at the first picture of this post, you can verify that the warning “There is no need to tiny the url because is less than 20 chars” is issued internally by the custom MSBuild task, and you are seeing it thanks to the WorkflowBuildEngine class that forwards the MSBuild log calls to the workflow environment.



Log warnings and errors in a custom action

Some time ago I blogged about logging in custom actions for TFS 2010 build, but I left out some details. Suppose you want to emit a warning or an error, and not a simple message: you need to create a specialized version of LogWarning that logs a real warning.


You can do the same with errors.
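
Both methods were shown as images in the original post; here is a minimal sketch of what they could look like, assuming the TrackBuildMessage, TrackBuildWarning and TrackBuildError extension methods from the Microsoft.TeamFoundation.Build.Workflow.Activities namespace:

using System.Activities;
using Microsoft.TeamFoundation.Build.Client;                // BuildMessageImportance
using Microsoft.TeamFoundation.Build.Workflow.Activities;   // TrackBuild* extensions

static class Utils
{
    // Tracks a plain message with the given importance.
    public static void LogMessage(string message, CodeActivityContext context,
        BuildMessageImportance importance)
    {
        context.TrackBuildMessage(message, importance);
    }

    // Tracks a real warning record, so it shows up with the warning icon.
    public static void LogWarning(string message, CodeActivityContext context)
    {
        context.TrackBuildWarning(message);
    }

    // Tracks a real error record; logging an error makes the build partially succeed.
    public static void LogError(string message, CodeActivityContext context)
    {
        context.TrackBuildError(message);
    }
}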


These two methods permit you to log warnings and errors during the execution of a custom build activity; let’s see how they affect the output. First of all, you can verify that when you log an error the build partially succeeds.


The error and the warning are reported in the detailed report with the right icons.


And they are also reported in the “View Summary” of the build.


Next time I’ll explain how to wrap an MSBuild custom task in a custom action.



Branching policies

I just read this post by Martin Fowler, and I found it very interesting. In my opinion even small projects benefit greatly from Continuous Integration: regardless of the branching policies you choose, having a machine for CI is vital during the lifetime of a project.

Usually I do not like cherry-picking very much, even if sometimes it cannot be avoided. In the Promiscuous Integration model people cherry-pick from other branches, and this scares me. The purpose of a branch is to keep changes isolated until they are ready to be moved into the trunk, or to keep copies of specific versions of the software, and merging between branches is usually problematic; some source control systems do not even permit merging changes between branches that are not contiguous.

In the example made by Fowler with Promiscuous Integration, DrPlum and Reverend Green work together in two different branches, and since they have great communication they periodically merge from one branch to the other; but in my opinion this scheme is a little confusing.

A better strategy could be this one. Since DrPlum and Reverend Green have great communication, they can work in a different way. Say DrPlum decides to build a new big feature and does not want to integrate into the trunk until it is completely finished, so he does not use CI. He asks the other members of the team if anyone is working on a branch; everybody says no, so he creates a new branch and starts working on it.

After some time Reverend Green also wants to create a new big feature. He asks the team, and DrPlum tells him that he is working on a feature that has some code in common with Reverend Green’s new one. At this point both Reverend Green and DrPlum create another branch from DrPlum’s original branch. The situation is the following: we have the trunk, then the first branch of DrPlum (call it B1), and from this branch two other branches, one for Reverend Green (B2G) and the other for DrPlum (B2P).

Now we can configure a Continuous Integration machine to integrate B1. Both DrPlum and Reverend Green work on isolated branches, but they merge changes often with B1, so they never need a big merge. In this scheme B1 acts like a trunk for the two developers. As bugfixes or changes are made in the trunk, one of them brings those changes into B1 and verifies that all tests pass; when B1 is stable again, each one propagates the changes from B1 to his own branch.

With such a scheme, two developers can work together on new features that have code in common, without the risk of a big merge, using Continuous Integration, but avoiding putting stuff in the trunk until the work is finished. Say Reverend Green has finished his feature: he first integrates all trunk changes into B1 and verifies that everything is ok, then merges the changes from B1 to B2G; when everything is ok he merges the remaining changes from B2G back to B1 (in the meanwhile DrPlum could have changed B1), then from B1 to the trunk, and the game is done. DrPlum does the same when he finishes.

I admit that I have never used such a complex scheme, because I prefer to have developers merge continuously into the trunk, where integration problems are mitigated thanks to CI.