Completely remove Lab Management configuration in TFS

If you want to completely remove the Lab Management configuration from your TFS instance, you probably already know the TfsConfig lab /Delete command, used to remove the association between a Project Collection and SCVMM. The reasons for completely removing the Lab Management configuration can vary; one of the most common is that you created a cloned copy of your TFS environment for testing purposes and want to be 100% sure the clone does not contact SCVMM. Another is that you have multiple test TFS instances and need to move Lab Management from one instance to another.


Figure 1: PreviewCollection has Lab Management configured.

In the picture above you can see that my PreviewCollection has the Lab Management feature enabled, so I can simply run the command TfsConfig lab /Delete /CollectionName:PreviewCollection to remove the association.
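For reference, this is the shape of the command, run from an elevated prompt on the application tier once per collection (PreviewCollection is the collection name from my environment; replace it with yours):

```bat
rem Remove the association between a Team Project Collection and SCVMM
TfsConfig lab /Delete /CollectionName:PreviewCollection
```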


Figure 2: TfsConfig command in action.

When the command completes you can verify that the collection no longer has the Lab Management feature enabled.


Figure 3: PreviewCollection now has Lab Management feature disabled.

After running that command for all your Lab Management-enabled Team Project Collections you may be disappointed to find that the SCVMM host is still configured in the TFS Console.


Figure 4: Even if no Team Project Collection is configured, the SCVMM host is still listed.

This is usually not a big problem, but if you want to be 100% sure that your TFS installation does not maintain any connection to the SCVMM instance used to manage your lab, you can use a simple PowerShell script you can find in this blog post. That post refers to TFS 2010, but the script is still valid for newer TFS releases; to write this blog post I used a TFS 2015 instance and everything went fine.

That post also mentions an alternative solution of directly updating the Tfs_Configuration database, but I strongly discourage you from using it, because you can end up with a broken installation. Never manipulate TFS databases directly.


Figure 5: Lab Management is completely removed from your TFS instance

Now the Lab Management configuration is completely removed from your TFS instance.

Gian Maria.

Error in AsyncProcessMessage during deploy phase of a TFS Lab Management Build.

I have a customer that experiences this error during a build with Lab Management:

Exception Message: Team Foundation Server could not complete the deployment task for machine xxx ….
Server stack trace:
   at Microsoft.TeamFoundation.Lab.Workflow.Activities.RunDeploymentTask.ExecuteDeploymentTask.RunCommand(AsyncState state)
   at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Object[]& outArgs)
   at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)
Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase)
   at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData)
   at System.Action`1.EndInvoke(IAsyncResult result)
   at Microsoft.TeamFoundation.Lab.Workflow.Activities.RunDeploymentTask.ExecuteDeploymentTask.EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
   at System.Activities.AsyncCodeActivity.CompleteAsyncCodeActivityData.CompleteAsyncCodeActivityWorkItem.Execute(ActivityExecutor executor, BookmarkManager bookmarkManager)

This customer is developing complex software that uses OpenGL to render to the screen, with lots of custom code to verify the rendering output. The Lab Management build is simply an xcopy deploy of the nightly build to a physical environment, where the UI tests run against the nightly build output.

This kind of error is not very informative. Moreover, we observed strange behavior: the first run of the build usually fails, the second run most of the time goes green, and the third is always OK. Since the deployment scripts copy the build output to the server, this sounds like a timeout: the first time the script runs it copies some files and then times out, and the subsequent runs finish copying the rest of the files (the copy is made with robocopy, so only files that were not already copied are transferred on a re-run).
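As a sketch, the robocopy step behaves like this (the paths are illustrative, not the customer's real ones); because robocopy skips files already present at the destination, a re-run after a timeout simply completes the copy:

```bat
rem /E copies subdirectories; /R:2 /W:5 bound retries so a stuck file fails fast
robocopy "\\buildserver\drops\Nightly\Latest" "C:\TestDeploy" /E /R:2 /W:5
```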

The strange aspect is that the timeout happens after a couple of minutes, while the build is configured to wait 30 minutes for the deploy script to finish.

If you have such a problem, please check how you are actually deploying binaries to the server.


Figure 1: Deploy phase of a Lab Management Build

In this example, deployment is done by invoking robocopy and xcopy directly from cmd /c, and this can cause a timeout problem. The bad thing about this kind of error is that it seems to completely ignore the deployment script timeout value you specify in the build. If you have this kind of deploy, I strongly suggest you move to a script-based solution (PowerShell is the best choice, but a simple bat file can be enough). I've blogged in the past on how to deploy a Web Project and a Database in a Lab Management virtual environment. My suggestion is to create a script stored in source control and referenced by the solution, so it gets copied to the drop folder during the build, and finally configure the Lab Management build to run the script on the machine.
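A minimal deploy script of the kind I suggest might look like the following sketch (file names and paths are illustrative; the script lives in source control and receives the drop location as its only argument):

```bat
@echo off
rem Deploy.bat - run on the lab machine by the Lab Management build
set BuildLocation=%~1
robocopy "%BuildLocation%" "C:\Deploy" /E /R:2 /W:5
rem robocopy exit codes 8 and above indicate a copy failure
if errorlevel 8 exit /b 1
rem Hand off to the real deployment steps (placeholder name)
call "C:\Deploy\Install.cmd"
```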

Figure 2: Deploy the build using scripts in source control that will be copied to drop location

Using a deploy script instead of invoking Cmd.exe directly solved the problem, and the build is now able to deploy every time without issues.

Gian Maria.

Lab management release date

Lab Management is surely one of the most exciting new features of TFS 2010, and now we finally have a release date.

Since Lab Management is a really complex set of tools, even though it is on the ISO image of TFS 2010 it is still considered to be a "release candidate". Now we can announce two big pieces of news:

  • RTM bits will be released at the end of August
  • Lab Management will be available even with MSDN Ultimate or Test Professional with MSDN

The second piece of news is very welcome, because it widens the number of people who can benefit from Lab Management.

If you want to test Lab Management you can find a preconfigured VHD.

There are also videos that can give you an idea of the product.

You can find more details here about the improvements made for RTM.


Deploy a solution and a database in a Lab Management Virtual Environment

One of the coolest features of TFS 2010 is Lab Management, an infrastructure tool that lets you manage virtual environments to test your applications. Once you have defined some template machines in SCVMM you can import them into your lab to be used when defining virtual environments.


Once you have imported all the templates you need from SCVMM, to create a new environment you simply go to the Environment tab and create a new virtual environment; then you can choose VM templates to compose a test environment, as in the following picture where I chose three machines: a web server, a DB server and a client machine.


This is the great power of Lab Management: the ability to compose virtual machines into a test environment that lets you run tests in a variety of different situations. Once you have deployed an environment you can view it in Lab Management; in this screenshot the environment is starting, so the Workflow capability is not ready yet. All machines used in a lab environment should in fact have the necessary agents installed to be controllable from the lab; you can find a prep tool that can dramatically cut down the time needed to prepare machines.


Once an environment is up and running you can use it for a lot of interesting operations, but the most interesting one is setting up a Lab Management-enabled TFS build, whose purpose is to compile the application, deploy the latest version into the virtual environment and run automated tests, all with a single click. Defining a Lab Management build is quite simple. First of all I logged into the database server and the IIS machine and prepared them to host my application: I created the IIS sites, created the first skeleton of the database, created the deploy scripts (they will be examined later) and moved everything the deploy scripts need onto the servers; when everything was ready I took a snapshot to save it all. Then I created a standard TFS build with only one little difference: I want MSBuild to create deploy packages for my web site, so I specify a couple of properties to MSBuild.
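The screenshot does not show the exact properties; for a VS2010-era web application, a typical pair that makes MSBuild produce the Web Deploy package during the build is the following (in the build definition these go in the MSBuild Arguments field; the solution name is a placeholder):

```bat
rem Ask MSBuild to build the Web Deploy package along with the compile
msbuild MySolution.sln /p:DeployOnBuild=true /p:DeployTarget=Package
```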


Now I can create another build, this time based on LabDefaultTemplate.xaml, and when you configure the settings you can do very interesting stuff. In the first screenshot I can choose an environment in which to run the build, and a valid snapshot to restore before the build takes place.


Those of you who do continuous integration know the pain of maintaining scripts when the test environment can be messed up by a dev or tester. I remember situations where the integration failed because a dev had stopped the IIS site to do some configuration and never restarted it; sometimes they messed up the application pool or the machine, and so on. Having the environment restored to a clean snapshot is a really good thing, because we are sure the deploy scripts will run in a clean and tested scenario.

This is especially interesting for production upgrades: I can set up an environment that is an exact copy of production, take a snapshot, and verify that the deploy scripts are able to upgrade the production environment without breaking anything. With this approach, testers run tests against a copy of the production environment upgraded to the latest version, which gives you great confidence in the upgrade procedure.

Then you can choose the build used to generate the artifacts: you can pick a build definition and queue a new build, use an existing one, or simply point to a build location.


Then comes the most feared part: the deploy phase.


Apart from the fact that you can take a snapshot of the environment after the deploy succeeds (useful for repeatable tests), you need to create deploy scripts.

Scripts are used because each environment can be isolated from the others thanks to network fencing, so the simplest way to install software on an environment is: copy a batch file to one of the machines, run it, and let it deploy the application. This part seems really complex, but thanks to database projects and the new packaging capability of VS2010 for web applications, building such scripts is really straightforward. First of all, the Virtual Machine column lists the logical names of the machines in the environment, so Lab Management is responsible for resolving the real machine names; then there is the script to launch; and finally, in the last column, the directory on the target machine where the script should run. Here is an example of how to specify the script:

$(BuildLocation)\Scripts\DeployDb.bat "$(BuildLocation)"

Scripts are located in the web project under the Scripts directory and are part of the solution, so they get deployed during the build and are available in the build location. All the scripts accept only the build location as a single argument.

Essentially each script has a first part that is common to all of them: it verifies that the directory passed as argument exists, creates a local directory where the deploy should take place, and sets up some variables; then it does the actual deploy. For the database we need two lines.
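A sketch of that common preamble, using the same %RemotePath% and %LocalPath% variables that appear in the database snippet (the local folder name is illustrative, not the one from my scripts):

```bat
@echo off
rem Common preamble shared by all deploy scripts
set RemotePath=%~1
if not exist "%RemotePath%" (
    echo Build location "%RemotePath%" not found
    exit /b 1
)
set LocalPath=C:\LabDeploy
if not exist "%LocalPath%" mkdir "%LocalPath%"
```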

xcopy /c %RemotePath%\IBuySpy*.* %LocalPath%\.
C:\Setup\vsdbcmd\vsdbcmd.exe /a:Deploy /ConnectionString:"xxx" /dsp:SQL /manifest:%LocalPath%\IBuySpyDatabase.deploymanifest /p:TargetDatabase=Store /dd

It essentially copies all the database project output files into a local folder, then runs vsdbcmd.exe to update the local database. The vsdbcmd.exe command was already copied to the server in the preparation phase. The good news is that the deploy is just a matter of a couple of lines; for the web application the situation is much the same.

xcopy /c %RemotePath%\_PublishedWebsites\WebApplication5_Package\*.* %LocalPath%\.
%LocalPath%\WebApplication5.deploy.cmd /Y

The first line copies the deploy package from the build location to a local path (the script knows that the build drops the packages in a _PublishedWebsites subdirectory), and finally it simply launches the command file to deploy the application to IIS.

As you can see, deploying a solution into a virtual environment can seem complicated at first, but for a standard project it is only a matter of a couple of lines in a bat file. Moreover, having such scripts is a good thing because you can use them to deploy the application automatically even outside of a Lab Management build.

Now you can choose the tests to run and launch the Lab Management build; here is a simple result.


If you look at the detailed build output you can find the whole output of the scripts.


Thanks to Lab Management you can, with a single click, have your latest source deployed to an environment, ready to be used by testers. If you have good servers and disk space, you can build as many environments as you need; as an example, you can deploy an environment with two IIS machines just to test your app in a load-balancing scenario.

Managing virtual environments with Lab Management is really fun and productive, and can dramatically cut the time and cost needed to test your application.


Lab Management TF267042 and DNS configuration

Today I configured an environment in Lab Management; when it finished the deploy phase I saw that the machine's testing capabilities were in the "Error" state, and the details of the error were:


The real error is TF267055:

The machine is not ready to run tests because of the following error: Unable to connect to the controller on '…'. Reason: A connection attempt failed because the connected party did not properly respond after a period of time… host has failed to respond

This is the configuration of this test machine.


The physical machine has a wireless IP that communicates with my office network, plus an IP on an internal Hyper-V virtual network where the Lab Management environment is running. Since the error states that the agent is trying to contact the controller on an IP of the 10.0.0.x network, it is clear why the configuration fails: the lab machine has no way to connect to the 10.0.0.x network.

If I connect to the lab machine and ping labrtmhost I get a correct answer from the right IP, but the problem happened during the configuration of the test controller. To solve it, first of all check the Active Directory DNS: I verified that both IPs were recorded for the labrtmhost machine. Since Active Directory is active only on the 10.10.1.x subnetwork there is no need for the other IP to be registered, so I deleted it from the DNS. This is not enough: I also needed to go to labrtmhost, open the configuration of the wireless network card and prevent it from registering its IP in the DNS.


Then I verified from every machine in the network that resolving labrtmhost with nslookup returns only the correct address. I then reopened the "Test Controller Configuration Tool" and configured the test controller again from the labrtmhost machine; this time it was registered in TFS with the correct IP. Finally I repaired the testing capabilities on the Lab Management-deployed machine, and everything is now OK.
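The verification uses standard Windows tools; these are the commands involved (a sketch of the checks, not the exact sequence from my session):

```bat
rem Verify that only the expected record comes back for the host
nslookup labrtmhost
rem On clients that still cache the stale record, flush the resolver cache
ipconfig /flushdns
rem On labrtmhost, after disabling DNS registration on the wireless card,
rem force re-registration of the remaining adapters
ipconfig /registerdns
```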


So, whenever you get a TF267055 error, always check the DNS records and DNS registration, and verify that the test controller is registered with the correct IP.