Completely remove Lab Management configuration in TFS

If you want to completely remove the Lab Management configuration from your TFS instance, you probably already know the TfsConfig lab /Delete command, used to remove the association between a Project Collection and SCVMM. The reasons for completely removing the Lab Management configuration can vary. One of the most common is that you created a cloned copy of your TFS environment for testing purposes and want to be 100% sure that the cloned instance does not contact SCVMM; another is that you simply have multiple test TFS instances and need to move Lab Management from one instance to another.

Figure 1: PreviewCollection has Lab Management configured.

In the above picture you can see that my PreviewCollection has the Lab Management feature enabled, so I can simply run the command TfsConfig lab /Delete /CollectionName:PreviewCollection to remove the association.
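If you have several collections to clean, a minimal PowerShell sketch can loop over them (the collection names below are just placeholders for your own):

# Run from the TFS Tools folder; removes the SCVMM association per collection
$collections = "PreviewCollection", "DefaultCollection"
foreach ($collection in $collections) {
    & TfsConfig lab /Delete /CollectionName:$collection
}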

Figure 2: TfsConfig command in action.

When the command completes, you can verify that the collection no longer has the Lab Management feature enabled.

Figure 3: PreviewCollection now has Lab Management feature disabled.

After running that command for all your Lab Management enabled Team Project Collections, you may be disappointed to find that the SCVMM host is still configured in the TFS Administration Console.

Figure 4: Even if no Team Project Collection is configured, the SCVMM host is still listed.

This is usually not a big problem, but if you want to be 100% sure that your TFS installation does not maintain any connection to the SCVMM instance used to manage your lab, you can use a simple PowerShell script you can find in this blog post. That post is related to TFS 2010, but the script is still valid for newer TFS releases; to write this blog post I used a TFS 2015 instance and everything went well.

That post also shows an alternative solution that directly updates the Tfs_Configuration database, but I strongly discourage you from using it because you can end up with a broken installation. Never manipulate TFS databases directly.

Figure 5: Lab Management is completely removed from your TFS instance

Now lab management configuration is completely removed from your TFS instance.

Gian Maria.

Nuget packages for TFS / VSO Client Object Model

The Client Object Model for TFS / VSO is finally distributed as NuGet packages, as you can read here. This is great news, especially because the DLLs are finally redistributable, and your tool no longer requires a previous installation of Visual Studio, Team Explorer, or the Client Object Model package.

Another interesting fact is that the REST API is now supported for TFS 2015 / VSO (previous versions of TFS do not support the REST API). If you have a traditional application that uses the Client Object Model, you can remove all the references to the old DLLs, directly reference the ExtendedClient package, and you are ready to go.
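If you want to try them, these are the packages to install from the NuGet Package Manager Console (as far as I understand, ExtendedClient carries the traditional Client Object Model, while the other two cover the new REST clients):

# Traditional (SOAP based) Client Object Model
Install-Package Microsoft.TeamFoundationServer.ExtendedClient
# REST based clients for TFS 2015 / VSO
Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client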

Gian Maria.

Create a safe clone of your TFS environment

ChangeServerId to avoid confusion of client tools

Around the web there are a lot of resources about how to create a clone of your TFS environment for testing purposes. The most important step has always been running the TfsConfig ChangeServerID command, as described in Move Team Foundation Server from one hardware configuration to another.
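As a reminder, the command is run from the TFS Tools folder against the cloned configuration database; the instance and database names below are illustrative:

# Assign a new server ID to the cloned instance (names are illustrative)
TfsConfig ChangeServerID /SQLInstance:CLONEDSQL /DatabaseName:Tfs_Configuration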

With the new wave of guidance for TFS 2015, an interesting new article came out on how to do a dry run in a pre-production environment. A couple of tricks in that article are worth a mention, because they are really interesting and easy to apply.

Risk of corrupting production environment

TFS uses a lot of extra tools and products to fulfill its functions: it is based on SQL Server databases, but it also communicates with Reporting Services, SharePoint, SCVMM for Lab Management, test controllers, and so on. When you restore a backup of your production environment to a clone (pre-production) environment, you need to be sure that this cloned installation does not corrupt your production environment.

As an example, if the cloned server still uses the same Reporting Services instance as the production server, you will probably end up with a corrupted Reporting Services database.

Protect your environment

In the above article, a couple of simple techniques are described to prevent your cloned pre-production TFS from corrupting anything in the production environment.

Edit your hosts file to make all production servers unreachable from the cloned server.

This is the simplest but most effective trick: if you modify the hosts file on the cloned machine, giving a nonexistent IP address for all the names of the machines related to the TFS environment, you can be pretty sure that the cloned environment cannot corrupt other services.

If for some reason you forget to change the Lab Management SCVMM address or SharePoint, the cloned machine is not able to reach them, because the names resolve to an invalid address.
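As a minimal sketch, from an elevated PowerShell prompt on the cloned machine (the server names are hypothetical, use the ones of your environment):

# Point every production machine name to an unreachable address
Add-Content C:\Windows\System32\drivers\etc\hosts @"
0.0.0.0   tfsreports.mylab.local
0.0.0.0   tfssharepoint.mylab.local
0.0.0.0   scvmm.mylab.local
"@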

Use a different user to run TFS services in the cloned environment, and be sure that this user has no special permissions

Usually TFS services run with an account called TFSService, and this account has lots of privileges on all machines related to the TFS environment; as an example, it has the right to manage SCVMM in a Lab Management scenario. If you create a user called TFSClonedService or TFSServiceCloned with no special permissions, and use that user to run the cloned TFS environment, you can be pretty sure that if the cloned environment tries to contact some external service (e.g. SCVMM, Reporting Services, etc.) it will get an Unauthorized exception.
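Creating such a user is a one-liner if the ActiveDirectory PowerShell module is available; a minimal sketch (the user name just follows the convention above):

# Create a plain domain user, with no special permissions, for the clone
Import-Module ActiveDirectory
New-ADUser -Name "TFSServiceCloned" -Enabled $true `
    -AccountPassword (Read-Host -AsSecureString "Password for the cloned service account")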

Remember that running a cloned TFS instance is an operation that should be done with great care, and you should adopt every useful technique to limit accidental damage to the production environment.

Gian Maria.

TFS Integration Platform, copy from Agile 2010 to CMMI 2013

Today I needed to move a bunch of Work Items from a TFS 2010 to a TFS 2013, but I also needed to move from a Team Project based on the Agile template to a project based on the CMMI template.

The number of Work Items was small, but lots of them had attachments, so I decided to use the Integration Platform to migrate history and attachments. It turned out that we accomplished an acceptable result in little time. An alternative, if you do not care about attachments and history, is using Excel.

First of all you need to be aware of the EnableBypassRuleDataSubmission option, which allows the Integration Platform to bypass rule validation of Work Items. This option is especially useful if you migrate to a different Process Template, because you are not sure that Work Items are valid when they transition from one process to another. This feature is also useful to preserve the author of each Work Item change in the destination project: if you do not enable it, all changes will be recorded as done by the user performing the migration.
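As a sketch, the option is enabled as a custom setting in the session configuration file (the exact position inside your configuration may differ):

<CustomSettings>
    <CustomSetting SettingKey="EnableBypassRuleDataSubmission" SettingValue="True" />
</CustomSettings>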

Moving to a different Process Template is mainly a matter of creating a mapping between fields of the source template (Agile) and fields of the destination template (CMMI). A nice aspect is that you only need to map fields that are actually used in the source Team Project. As a suggestion, you should try to copy everything to a test Team Project in a test Project Collection, and repeat the migration several times until the result is good.

You can start with a wildcard mapping and then start the migration; during the Analysis phase the Integration Platform will generate conflicts that show you what is wrong. You can solve the problems and update the mapping until everything runs smoothly. In Figure 1 you can see the most common error: a field that is present in the source Process Template is not present in the destination Process Template. In that picture you can verify that Microsoft.VSTS.Common.AcceptanceCriteria is missing in the CMMI project.

Figure 1: Conflicts occur because not all used fields are mapped in the configuration.

Be sure to refer to the Work Item Field Reference to verify which fields are available in the destination template. One of the nicest features of the Integration Platform is that you can simply specify the target field during migration, and this automatically updates the mapping. Since CMMI does not have the Acceptance Criteria field, you can add it (editing the process template of the destination Team Project), use another field (e.g. Analysis), or do some more complex mapping (I will show an example later in the post).

You can also choose to update the mapping ignoring the field; in this case the content of that field will simply not be migrated. You can also manually update the XML mapping configuration if the resolution requires a complex modification of the mapping.

Figure 2: Update the configuration if you need to do some complex resolution

At the end of the migration you should double check what happened, because bypassing Work Item rules usually leads to Work Items in an inconsistent state. As an example, in the Agile process a Task can have the New state, while this is invalid in CMMI.

Figure 3: Some migrated Work Items can have an invalid state because we decided to bypass validation rules.

It is common to have the same field admit different values in different Process Templates. The Task Work Item type has an Activity field in Agile that can be mapped to Discipline in CMMI, but the allowed values are different. In such a situation you can map the Activity field to be copied to Discipline, using a lookup map to convert the values.

<MappedField
    LeftName="Microsoft.VSTS.Common.Activity"
    RightName="Microsoft.VSTS.Common.Discipline"
    MapFromSide="Left"
    valueMap="ActivityMap" />

<ValueMap name="ActivityMap">
    <Value LeftValue="Deployment" RightValue="Development">
        <When />
    </Value>
    <Value LeftValue="Design" RightValue="User Experience">
        <When />
    </Value>
    <Value LeftValue="Development" RightValue="Development">
        <When />
    </Value>
    <Value LeftValue="Documentation" RightValue="Analysis">
        <When />
    </Value>
    <Value LeftValue="Requirements" RightValue="Analysis">
        <When />
    </Value>
    <Value LeftValue="Testing" RightValue="Test">
        <When />
    </Value>
</ValueMap>

The last useful technique is the ability to compose the destination value using multiple source fields. As an example, CMMI does not have the AcceptanceCriteria field that is present in Agile. From my point of view, Acceptance Criteria in Agile can be considered part of the Description field in CMMI. Thanks to FieldsAggregationGroup I was able to copy both the Description and AcceptanceCriteria fields from User Story (Agile) to the Description field of Requirement (CMMI).

<FieldsAggregationGroup MapFromSide="Left" TargetFieldName="System.Description" Format="Description:{0} AcceptanceCriteria:{1}">
	<SourceField Index="0" SourceFieldName="System.Description" />
	<SourceField Index="1" SourceFieldName="Microsoft.VSTS.Common.AcceptanceCriteria" />
</FieldsAggregationGroup>

Thanks to this configuration, I can map multiple source fields to a single field. Figure 4 depicts a User Story that has both Description and Acceptance Criteria populated.

Figure 4: A User Story Work Item that has both Description and Acceptance Criteria populated

Thanks to FieldsAggregationGroup I am able to compose the content of these two fields into a single field of the migrated Work Item. Here is the corresponding Work Item in the destination Team Project after the migration.

Figure 5: Result of composing two source fields into a single destination field

Another interesting feature is the ability to specify a default value for required fields that exist only in the destination Team Project, thanks to the @@MissingField@@ placeholder.
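I did not need it in this migration, so take this as a sketch based on the mapping syntax shown above; the target field and the default value are purely illustrative.

<MappedField
    LeftName="@@MissingField@@"
    RightName="Microsoft.VSTS.CMMI.SubjectMatterExpert1"
    MapFromSide="Left"
    valueMap="DefaultSme" />

<ValueMap name="DefaultSme">
    <Value LeftValue="*" RightValue="Administrator">
        <When />
    </Value>
</ValueMap>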

If you need resources about the Integration Platform, I suggest looking at this article, which contains a huge number of links covering almost every need: http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/06/06/toc-tfs-integration-tools-blog-posts-and-reference-sites.aspx

Gian Maria.

Cleaning up your WSUS Server

The situation

I have a WSUS server that has been up and running for the last 2 years; it runs on an HP ProLiant MicroServer where I also have a domain controller for my test lab. The purpose of the WSUS server is to reduce download time for updates when I install test VMs. I have lots of test VMs, and the time needed to download updates is really long, so I decided to configure a WSUS server to mitigate the problem.

Since I really do not need fine-grained management for a real production environment, I set up my WSUS server to download almost every update. For 2 years I only approved updates without any further management, until one day my WSUS server became so slow that the console crashed 90% of the time and I was no longer able to approve new updates.

Time to clean up!!

There are a lot of great articles on the internet about how to clean up a WSUS server; I just want to share my experience so you can avoid my mistakes and have your WSUS server fully operational in the least time possible.

Move to a better hardware

My experience was: if your WSUS server has slowed down too much, there is no way to recover it without moving to better hardware.

To give you an idea of the difference: launching the standard cleanup on my WSUS server reached barely 20% in 24 hours, while on fast hardware the whole operation took 8 hours. If WSUS is virtualized, you can temporarily move the VM to new hardware with a faster CPU and an SSD, but in reality you can simply move your DB to faster hardware and you are ready to go.

The goal is: move the DB to the fastest machine in your network, do the maintenance, then move the database back to the original WSUS location. There are tons of resources on how to move your database to new hardware; this is the link I followed, and it is just a matter of installing SQL Server Management Studio, detaching your DB, and re-attaching it on a new SQL instance.
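If you prefer the command line to Management Studio, the detach and re-attach can be sketched with sqlcmd (the TEMPSQL instance name and the file paths are purely illustrative):

# On the WSUS machine: detach SUSDB from the Windows Internal Database
sqlcmd -S \\.\pipe\Microsoft##WID\tsql\query -E -Q "EXEC sp_detach_db 'SUSDB'"
# On the temporary server, after copying the mdf/ldf files over
sqlcmd -S TEMPSQL -E -Q "CREATE DATABASE SUSDB ON (FILENAME='D:\Data\SUSDB.mdf'), (FILENAME='D:\Data\SUSDB_log.ldf') FOR ATTACH"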

Be sure that your destination SQL Server is the very same version as the one used by your original WSUS installation

When you connect to your original WSUS database (in Windows Server 2012 R2 the instance name is \\.\pipe\Microsoft##WID\tsql\query), issue a SELECT @@VERSION to verify the SQL Server version (in my situation it is 2012). Since cleanup operations use at most one CPU on the DB server (at least in my situation), I moved the DB to an i7 2600K overclocked to 4.2 GHz with a Samsung 840 SSD disk. During cleanup operations sqlserver.exe used only a single core, so I suggest moving your DB to a machine with a powerful CPU.
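In practice, the version check from the WSUS machine itself is a one-liner:

# Check the version of the SQL instance hosting SUSDB (here the WID pipe)
sqlcmd -S \\.\pipe\Microsoft##WID\tsql\query -E -Q "SELECT @@VERSION"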

Be sure that the machine with the temporary SQL Server installation is joined to the domain, or you will not be able to access the database from the WSUS machine.

In order to be 100% sure that the WSUS machine is able to access the database, configure the machine account of the WSUS server as an admin of the temporary SQL Server.

Figure 1: Neuromancer is the computer running WSUS; it is added as a login identity to SQL Server so that the WSUS service (which runs as Network Service) is able to reach SQL Server running on another machine of the domain
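If you prefer a script to the UI shown in Figure 1, a T-SQL sketch run against the temporary server could look like this (the MYLAB domain and the NEUROMANCER machine name are just from my lab, replace with your own):

# Grant the WSUS machine account (WSUS runs as Network Service) access
sqlcmd -S TEMPSQL -E -Q "CREATE LOGIN [MYLAB\NEUROMANCER$] FROM WINDOWS; ALTER SERVER ROLE [sysadmin] ADD MEMBER [MYLAB\NEUROMANCER$]"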

Rebuild your indexes

You can find instructions at this address on how to rebuild all the indexes of your WSUS DB. With an 11 GB DB, the whole operation on an SSD took no more than 15 minutes. This operation is really important, because if some index is heavily fragmented, all cleanup operations will be really slow.
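If you just want a quick rebuild without the full maintenance script from the linked article, a minimal sketch run against the temporary instance (the TEMPSQL name is illustrative) could be:

# Quick and dirty: rebuild every index of every table in SUSDB
sqlcmd -S TEMPSQL -E -d SUSDB -Q "EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'"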

What to clean up

The very first and fundamental technique is to avoid downloading driver updates.

There is absolutely no need to let WSUS manage driver updates: on my system, after I removed driver updates, the total number of updates dropped from 62000 to 12000.

If you read articles on the internet, you may believe there is no way to really delete an update from WSUS: you can only decline it, but it will remain in the database. This is true if you are using the MMC user interface, but with PowerShell you can run a little script that removes all the updates of type Drivers from your system. That article also describes a technique to remove updates directly from the database; I absolutely discourage you from using it because you can potentially destroy everything. At the end of the article you can find a simple PowerShell script that is capable of removing updates from WSUS in a supported way.

# Load the WSUS administration assembly and connect to the local server
[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
# Delete every update classified as a driver, logging each removed title
$wsus.GetUpdates() | Where-Object {$_.UpdateClassificationTitle -eq 'Drivers'} |
    ForEach-Object {$wsus.DeleteUpdate($_.Id.UpdateId.ToString()); Write-Host $_.Title removed}

This script can require days to run on a standard machine. Thanks to the fast CPU and SSD, my 40000 driver updates were deleted in about 9 hours. This confirms the need to temporarily move the DB to a really fast machine.

This script is also useful for removing updates related to removed classifications. As an example, now that Windows XP is out of support, if you do not have any Windows XP machine in your lab, you can remove XP from the classifications and run the script to remove all updates that refer to Windows XP. So please review all of your classifications to reduce the number of approved updates.

Finally, you can also use the script to delete all the updates that are declined and superseded by other updates. Once you finish removing all unnecessary updates, your WSUS server should be really faster.
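The same approach works here; a minimal sketch, assuming the same $wsus connection created by the script above (IsDeclined and IsSuperseded are standard properties of the WSUS update object):

# Delete updates that are both declined and superseded by other updates
$wsus.GetUpdates() | Where-Object {$_.IsDeclined -and $_.IsSuperseded} |
    ForEach-Object {$wsus.DeleteUpdate($_.Id.UpdateId.ToString()); Write-Host $_.Title removed}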

Final cleanup

After you have removed all unnecessary updates, you should run the standard WSUS Cleanup Wizard, followed by another full rebuild of all indexes, and then everything is fine.

Another suggested option is defragmenting the drive where the WSUS DB is located, an operation that can be done after you stop the instance that is using the DB. If you followed my suggestion of moving your DB to better hardware, I suggest moving the database back to a drive with plenty of free space on your WSUS server.

In my situation the C: drive is 120 GB with 50 GB free. After all cleanup operations finished, I detached the DB from the temporary location and decided to move the files to a 2 TB drive that has 1 TB free. With that amount of free space, after the copy the database files are completely defragmented.

Now my WSUS Server is operational again.

Gian Maria