Publish NuGet Package to a private NuGet Server with TFS Build and Symbol Server

Previous posts in the series

After you set up automatic publishing of NuGet packages, with automatic assembly and NuGet version numbering, in a TFS Build, you will surely want to publish symbols to a Symbol Server as well. This permits consumers to reference your NuGet package and then debug its code thanks to Symbol Server support in TFS. Publishing symbols is just a matter of specifying a shared folder to store symbols in the build configuration, but if you enable it in the previous build, where the package is published with PowerShell, it does not work. The reason is that the PowerShell script that publishes the NuGet package runs after build (or after test), but in the build workflow, source indexing happens after these steps.



Figure 1: Build workflow sequence showing that publish of symbols takes place after NuGet package is published

The problem is that you should publish your NuGet package only after symbol publishing has taken place and your .pdb files have been modified to point to sources in TFS. To fix this you first need to download the build workflow (in TFS 2013 the default build workflows are not stored directly in Source Control and must be downloaded if you want to customize them) and create a custom build.
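For reference, the publishing script itself is just a standard NuGet push; a minimal PowerShell sketch, meant to run as the script at the very end of the build, could look like this (parameter names, feed URL and API key are hypothetical):

```powershell
# Minimal sketch of a NuGet publish script. Run it at the end of the
# build workflow, so the .pdb files have already been source indexed.
param(
    [string] $binariesDirectory,                              # drop folder passed by the build
    [string] $nugetServerUrl = "http://mynugetserver/nuget",  # hypothetical private feed
    [string] $apiKey = "xxx"                                  # hypothetical API key
)

# Find every package produced by the build and push it to the private feed.
Get-ChildItem -Path $binariesDirectory -Filter *.nupkg -Recurse |
    ForEach-Object {
        & nuget.exe push $_.FullName -Source $nugetServerUrl -ApiKey $apiKey
    }
```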


Figure 2: Downloading the standard Workflow Template in your machine to create a custom Workflow

You can just download it to your computer, change the name of the file, change the inner workflow, and check it in to Source Control as you would have done with previous versions of TFS. My goal is adding the ability to run another script at the very end of the workflow, so I opened the workflow, simply copied and pasted the Run optional script after Test block to the end of the sequence, and renamed it Run optional script at the end of the Build.


Figure 3: Copy and paste the script execution block to enable executing a script at the end of the Workflow

Then I added two other Workflow Arguments, to allow the user to specify the location and arguments of this script.


Figure 4: Adding arguments to pass to the new script execution block.

Now you should change the Run optional script at the end of the Build block to reference these two new arguments instead of the original ones.


Figure 5: Referencing the new arguments in copied block

Finally you need to return to the Arguments of the Workflow and change the Metadata argument, to specify some additional data about these two arguments.


Figure 6: Adding Metadata for your custom arguments.

Here you can give a name and description to your arguments, but the most important part is specifying a category and the Editor. In this example the PostExecutionScriptPath should be a Source Control path like $/TeamProjectName/xxx, and if you specify

Microsoft.TeamFoundation.Build.Controls.ServerFileBrowserEditor, Microsoft.TeamFoundation.Build.Controls

as the Editor, the user will be able to browse source control as for the other scripts in the build. You should check the new workflow in to source control, edit the build definition, and in the Process tab choose the new workflow. You should now be able to see the new arguments that specify the script to run at the end of the build.


Figure 7: You can now specify a script that will be executed at the very end of the build.

Thanks to the Editor, if you select the Post Execution Script Path you will find an ellipsis button that permits you to browse the source to specify the file location.


Figure 8: Thanks to the Editor property in metadata you are able to browse the source to specify the script.

You can now use the very same NuGet publishing script, but since it is executed after symbol publishing, your NuGet package now contains indexed .pdb files and everything works as expected.

Gian Maria.

Deploy from a Team Foundation Server build to Web Site without specifying password in the build

In a previous article I explained how to deploy an ASP.NET Web Site from a TFS Build thanks to the MSDeploy engine. One of the biggest complaints about that solution is the need to specify username and password in the build configuration and the need to use AllowUntrustedCertificate=true.

The certificate problem is the simpler of the two to solve: you just need to use a certificate that is trusted inside your organization, or a certificate issued by a trusted certificate authority (e.g. GoDaddy), instead of the default one generated by the MSDeploy configuration in IIS. This is mostly an administrative task and I'm not going to cover it in this post.

The most annoying part is getting rid of the password. You should start by configuring the deploy user for IIS using the same user that runs the TFS Build; this gives the build user permission to execute the deploy.


Figure 1: Give to the build user publish permission.

This is not enough; if you fire your build you will probably receive an authorization error.

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (4255): Web deployment task failed. (Connected to the remote computer (“webtest1.cyberpunk.local”) using the Web Management Service, but could not authorize. Make sure that you are using the correct user name and password, that the site you are connecting to exists, and that the credentials represent a user who has permissions to access the site.  Learn more at:

The dreaded ERROR_USER_UNAUTHORIZED can frustrate you for a long time, because it is not so easy to solve. First of all, check whether Windows Authentication is enabled in the IIS configuration. Go to the web server and verify the settings of the Management Service.



Figure 2: You should configure Management Service to use Windows Authentication


Figure 3: You should be sure that Windows Credentials is enabled.

This is usually not enough; you should also add a value in the registry. Locate the HKLM\SOFTWARE\Microsoft\WebManagement\Server key and add a DWORD value named WindowsAuthenticationEnabled with the value 1. Finally restart the Management Service (net stop wmsvc and then net start wmsvc).
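From an elevated command prompt on the web server, the registry change and the service restart described above can be done like this:

```
reg add "HKLM\SOFTWARE\Microsoft\WebManagement\Server" /v WindowsAuthenticationEnabled /t REG_DWORD /d 1 /f
net stop wmsvc
net start wmsvc
```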

If you run the build you will probably still get the ERROR_USER_UNAUTHORIZED error; this is caused by the parameters passed to MSBuild. There are two parameters you need to pass to MSBuild to make integrated security work:

1) /p:UserName=""

2) /p:AuthType=NTLM

An empty UserName parameter is needed for this to work; if you forget to specify it you will get ERROR_USER_UNAUTHORIZED even if the user running the script (tfsbuild) has all the rights needed to deploy the site. The AuthType parameter tells the server to use Windows Authentication.
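Putting it together, the full set of MSBuild arguments for a password-less deploy looks like the following; the profile name AzureVM is just an example, use the publish profile of your own project:

```
/p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:UserName="" /p:AuthType=NTLM
```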

Now your build should be green; you have no credentials stored inside the build definition, and you are not breaking any security good practice.

Gian Maria.

Build, Deploy, Web Performance test with TFS build

To fully understand this article you need to read the previous articles of the series

In those articles I explained how you can automatically publish your web site to a standard IIS-hosted web site or to an Azure Web Site. The cool part is that you just need to add extra MSBuild arguments to the TFS Build process definition and you are done. For Azure Web Sites you also have a dedicated publishing template that integrates with the Azure Web Site dashboard (as you can see in the following figure).


Figure 1: Azure web site deployments tab

If you use the standard DefaultTemplate workflow (with MSBuild arguments) the Deployments tab will not be populated, but you can use it to deploy a TF Service project that uses Git (currently not supported by the standard Azure deploy template).

But the main question is: why should I care about creating a structure for automatic deploy, and what are the benefits?

One of the main purposes of Continuous Deployment is to deploy to a test environment and execute a series of integration tests to find problems as soon as possible. The goal is:

Fail as early as possible

Even if your team is committed to unit testing, we all know that many problems arise from wrong deployments, wrong configuration files in production, bugs with production data, and so on. These problems cannot be addressed by standard unit testing; there is a need for a suite of integration tests that exercise the whole software deployed to an environment. Another reason is that some of the errors we discover in production are related to manual deploys. People make errors, especially when a deploy is a task that is done infrequently, is poorly documented, and is a really boring process (bored people tend to pay less attention). The most common failure is a new, undocumented configuration setting created by developers; the symptom is that you follow all the instructions in the deploy document, but a section of the software fails for apparently no reason. After a little investigation you find that some new setting introduced a couple of months ago is misconfigured (which can include calling developers in the night to understand what went wrong).

In this scenario, the first advantage of an automatic deployment environment is that the engineering team is forced to keep the automatic deployment scripts healthy. When it is time to deploy to production, the operations team can use the same automatic procedure to minimize deploy errors. Another advantage of automatic deployment is the detailed log of every operation done by the scripts, which allows you to diagnose problems quickly.

One key part of the process is automatic validation of the deploy, where the focus is spotting as many errors as possible with automatic procedures. Having a good set of integration tests capable of validating a deploy can save a lot of time and also gives the whole team good confidence in the deploy process, so they can deploy often.

Consider a simple scenario: an application with a SQL Server database.

Thanks to SQL Server Database Projects, it is easy to automate the creation/upgrade of the database schema, as well as preloading some test data so you can run tests after the deploy. If you do not use a Database Project, or if you want to run integration tests against some production data, the team should maintain a backup of a dedicated test database and you should customize the build deployment scripts to automatically restore databases from a well-known backup after the deploy of the web site.

Apart from the technique used to manage test databases, you need to exercise the deployed software, and for a web site one of the best solutions is using Visual Studio Web Performance Tests. A Web Performance Test is meant to be used in load tests, but it also contains assertions and rules to verify that the site responds as expected. Another interesting aspect of Web Performance Tests is that they can be run with the standard MSTest runner, so you need no customization of the TFS Build script to run them after an automatic deploy. Moreover, a Web Performance Test is a recording of HTTP calls and can be executed without a UI. The drawback of this technique is that it does not run JavaScript code. If your site has a lot of JavaScript you should use a different technique, like Coded UI Tests.

I started this demo by creating a couple of Web Performance Tests.



Figure 2: A couple of Web Performance Tests in an integration test project

The SmokeCallPages test simply clicks on every page of the site; thanks to the IE recorder plugin, recording such a test is really simple. Once recorded you should add some assertions. In the production configuration I have CustomErrors="on" in the web.config file, so even if the site raises an exception the user is redirected to a standard page with the warning "we are experiencing some mechanical problem". This leads to adding a rule that fails the test if the HTML of the page contains this string. The rule is then applied to each response of the test.


Figure 3: A validation rule that makes the test fails if a certain string is contained in the HTML.
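A rule like the one in Figure 3 can also be written in code; a minimal custom validation rule sketch, assuming the standard Microsoft.VisualStudio.TestTools.WebTesting API (the class name and message are mine), could be:

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

// Fails the request validation when the custom error page text
// is found in the response body.
public class FailIfErrorPageRule : ValidationRule
{
    private const string ErrorText = "we are experiencing some mechanical problem";

    public override void Validate(object sender, ValidationEventArgs e)
    {
        if (e.Response.BodyString != null &&
            e.Response.BodyString.Contains(ErrorText))
        {
            e.IsValid = false;
            e.Message = "The site returned the custom error page.";
        }
        else
        {
            e.IsValid = true;
        }
    }
}
```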

You can write complex tests: you can create base web performance tests that perform login and use them to build a series of tests that exercise the private part of the site, write plugins, and use many other features. Once the tests are finished, you should parameterize the web site address, as you can see in Figure 3, where the URL of the site to test is {{TailspinToysTestSite}} and not a real address. Everything enclosed in {{ }} is a context parameter; in the test definition its value is set to http://localhost:13230, but it can be overridden with environment variables. If you set up a machine-level environment variable called Test.TailspinToysTestSite (the name of your context parameter prefixed by Test.), it will override the value stored in the test.
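For example, on the test agent machine you can set the machine-level override with setx (the site URL here is hypothetical; the build service must be restarted to pick up the new variable):

```
setx Test.TailspinToysTestSite "http://tailspintoys-test.azurewebsites.net" /M
```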

Once everything was in place, I used a Build Agent inside a virtual machine (where I set the Test.TailspinToysTestSite environment variable to point to my Azure test site), then I configured a standard TFS build, added the right MSBuild arguments to deploy the web site, and finally specified the integration tests to run.


Figure 4: Configuring integration test to be run during the build

All you need to do is go to the Automated Tests section. Here you will find a standard test run configuration that runs all tests in assemblies whose name contains the word "test". If you execute the build you will find that the integration tests do not execute; this is because TFS 2012 has a new agile test runner that is not capable of running Web Performance Tests. The solution is adding another test run, specifying MSTest.exe runner (VS2010 compatible) as the Test Runner, and changing the Test Assembly file specification to run everything with a .webtest extension (Figure 4). Now you have two distinct test configurations.


Figure 5: The two test runs specified in the build.

I also set the Fail Build on Test Failure parameter to true, because while I can tolerate a standard unit test failing, if a single integration test fails it means that something basic is broken in the deploy and I want to react immediately. Since all test results are automatically pushed to TFS, it is simple to understand the reason for a failure, as you can see in the following build summary.


Figure 6: Build summary contains all test run information

There are two distinct test runs: the first one uses the standard agile test runner and runs the standard unit tests; it is followed by the results of the integration tests. The nice aspect of the build summary is that it immediately shows the name of the failing test (SmokeCallPages); the whole test result can also be downloaded locally to better identify the cause of the problem.


Figure 7: Test results can be downloaded locally to examine the root cause of the problem and the exact request that is failing.

From the test result you can immediately spot the request that failed, and if you have some form of logging enabled, like Elmah, you can get a better clue about what happened simply by looking at the log page. Test results also contain a lot of information, like the full request and full response, so the dev team can insert some diagnostic information in the page response, visible only when the site is deployed in the test environment (like a hidden field with the full exception error or some internal error code). Even if you do not use such a technique, Elmah can at least give full exception details in a dedicated web page.


Figure 8: Elmah handler is showing the error.

Thanks to this simple Build / Deploy / Integration Test workflow, integration errors are immediately spotted during the development cycle. The more integration tests the team writes, the more bugs are discovered automatically, so the team can immediately react and fix them. Having integration tests also allows you to spot builds that are good enough to send to the Test Team, because it is not worth wasting your Test Team's time testing a really buggy build.

In this example the problem was a malformed XML file containing the "About Us" text (easily spotted from the Elmah log); now the team can fix the XML file, verify the fix by running the failing integration test locally, and finally check in the code. The next build run will verify that everything is good.


Figure 9: Check-in fixed the problem, the build is now green.

The cool part of integration testing with Web Performance Tests is that you can download test results even for green builds to gather metrics: page timings, response sizes, etc.


Figure 10: Build summary shows all test passed.

Remember that with Web Performance Tests you can create assertions on response size and response time, and write plugins to verify whatever you want, so they are a really powerful tool in your toolset for testing your web application.

Gian Maria

Deploy Asp.NET web site on IIS from TFS Build

In the last article of the series I dealt with deploying to Azure Web Sites from an on-premise TFS, but the very same technique can be used to automatically deploy from a standard TFS Build to a standard web site hosted in IIS, not in Azure. For this demo I prepared a VM on Azure, but the configuration is exactly the same if the VM is on-premise or if you use a physical machine to run IIS. The only difference from deploying to an Azure Web Site is that we are deploying to a web site hosted on IIS.

Step 1: Configure IIS for Web Deploy

You can find a detailed article here with all the steps needed to configure Web Deploy publishing; once Microsoft Web Deploy is installed, just create a site and enable Web Publishing by right-clicking on it. If the Deploy menu does not appear, some part of the Web Deploy publishing service was not installed properly.


Figure 1: Configure Web Deploy Publishing

Configuring Web Deploy publishing for a web site is just a matter of specifying some information, and most of the time everything can be left at its default value.


Figure 2: Deploy configuration

This dialog sets up the site to enable MSBuild publishing; it saves a .publishSettings file in the specified location (in this example the desktop). The most important setting is the URL for the publishing server connection. Be sure that the port used (8172 in this example) is open in the firewall and that all routers are configured to make it reachable from the machine where the build will run.
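A quick way to open the port on the web server itself, assuming the default Web Management Service port 8172, is a netsh rule from an elevated prompt:

```
netsh advfirewall firewall add rule name="Web Deploy (WMSVC)" dir=in action=allow protocol=TCP localport=8172
```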

Step 2: Configure the Publish settings file

Even though the configuration dialog shown in Figure 2 creates a .publishSettings file, it is possible to create such a file directly from Visual Studio. This is usually the preferable option because you can customize it to support database publishing and to change the connection string. Creating publish settings from scratch in Visual Studio is really easy: just right-click the web project, choose Publish, then choose to create a new publishing profile.


Figure 3: Configure the connection for publishing

The Validate Connection button is really useful to verify that everything works correctly; once it is green, you can further customize the settings. A common configuration when you work with Database Projects is enabling automatic database schema publishing.


Figure 4: Configure for automatic Database Update and change the connection string used in destination server

Once everything is correctly configured you can close the configuration dialog, save the publish settings, and check everything in to source control. To verify that everything works you can do a test publish from Visual Studio; just be aware that the location of the .dacpac file should be changed so it can be found by the Build Controller (described later in this article).

Step 3: Configure the build

This is the easiest step; since it is the very same as deploying to an Azure Web Site from an on-premise TFS, I actually cloned a build used for that post and simply changed the name and credentials in the publish settings file used.


Figure 5: Configure MSBuild Arguments to deploy the site

The whole string is the following one; you are simply asking MSBuild to publish the site using the AzureVM profile.

/p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:AllowUntrustedCertificate=true /p:UserName=gianmaria.ricci /p:Password=xxxxxxxxx

Clearly, to be able to publish the database schema, you should manually change the location of the .dacpac file as described in the previous article.

As a final note, always configure the build to index sources using a Symbol Server. If a Symbol Server is configured and there is a bug in the production site that is not reproducible on dev machines, you can use the standalone IntelliTrace collector to collect an IntelliTrace file; once the .itrace file is loaded in Visual Studio, it will automatically download the original source files used to compile the version of the site that generated the trace.


Figure 6: Once symbol server is configured, I can browse the source from intellitrace file

This permits you to simply open the IntelliTrace file offline, navigate through events, and have VS automatically download the source code for you, without even knowing where the solution is located.

Gian Maria.

Continuous Deployment on Windows Azure with Database projects

I've already blogged about deploying to an Azure Web Site with a Database Project in the past, but in that article I showed how to accomplish it by customizing the build template. That technique is useful because quite often you need to run custom scripts or tools to perform additional deploy-related procedures, but if your need is simply deploying schema changes to an Azure database with a Database Project, you can accomplish it without even touching the build workflow.

First of all, download the publishing profile of your Azure web site.


Figure 1: Download publishing profile from Azure Web Site

Once you have the publishing profile, it can be imported into Visual Studio: just right-click the project node inside Visual Studio and choose Publish. From there you can import the publishing profile you just downloaded and modify it for your needs.


Figure 2: Importing publishing profile from Visual Studio

From the Settings tab you have the option to specify a .dacpac file to update the database schema.


Figure 3: Use a .dacpac to deploy database schema changes

Unfortunately this configuration is good for direct publishing from Visual Studio, but to make it work during a TFS Build you need to do some manual modification. The publishing profile file can be found inside the Properties/PublishProfiles node of your project, and you can edit it with a simple XML editor.


Figure 4: Modify the location of the dacpac to match the location this file will have during TFS build

The trick here is modifying the path of the .dacpac file to match its location on the build server during the build. Knowing the location is quite simple; usually I map an entire branch (trunk, main, or some release branch) as the root dir for the build.


Figure 5: Workspace configuration of Deploy build

Now I need to examine the structure of the project folders; suppose the trunk has a sub-folder called TailspinToys that contains the web site project I want to deploy (called Tailspin.WebUpgraded).


Figure 6: Folder structure of my project

When the build takes place, all build output (including all .dacpac files) is stored in a folder called bin that is one level above the folder you mapped in the build workspace. To locate the file during publish, I need to go three folders up to reach the parent of the trunk (my project is in Trunk\TailspinToys\Tailspin.WebUpgraded), then add bin and the name of the .dacpac file. So the location of the dacpac file is ..\..\..\bin\xxxxx.dacpac, as you can see in Figure 4. Do not worry if it seems complicated; it is just a matter of a couple of tries to find the right folder ;).
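As a sketch, the database section of the modified profile could look like the following; the dacpac file name and connection details are hypothetical, and the exact elements depend on how Visual Studio generated your profile:

```xml
<!-- Fragment of a Properties/PublishProfiles/*.pubxml file (names hypothetical) -->
<PublishDatabaseSettings>
  <Objects xmlns="">
    <ObjectGroup Name="DefaultConnection" Order="1" Enabled="True">
      <Destination Path="Data Source=myserver;Initial Catalog=TailspinToys" />
      <Object Type="DbDacFx">
        <!-- Relative path: three levels up from the mapped folder, then bin -->
        <Source Path="..\..\..\bin\Tailspin.SchemaAndData.dacpac" dacpacAction="Deploy" />
      </Object>
    </ObjectGroup>
  </Objects>
</PublishDatabaseSettings>
```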

Once the publish file is modified and checked in, you can use it in your build definition.


Figure 7: Choose your new publish file as a publishing profile for the build.

Now you can run the build, and the database linked to the web site will be automatically updated with the definition of your Database Project.

Gian Maria.