Team Project Rename

Renaming Team Projects is one of the most requested features on Visual Studio User Voice:


Figure 1: Team Project rename suggestion on User Voice

Today Brian Harry announced a new Visual Studio Online update, and it contains this feature, even if it will not be available to everyone at the beginning. If you use TFS on-premises, you will get this feature in the upcoming version: TFS 2015.

I strongly suggest you have a look at Visual Studio User Voice and post your suggestions, because it is on the radar of the Visual Studio team.

Gian Maria.

BufferingAppenderSkeleton performance problem in log4net

The problem

Today I was working on the Jarvis project when another developer warned me that the MongoDbAppender for log4net slows down the application during a full rebuild of all projections. In that specific situation the log level is set to DEBUG and the rebuild generates 800k log entries in the Mongo database, so I expected the Mongo appender to slow things down a little. The problem is: using a standard FileAppender to measure the difference in speed, it turned out that the FileAppender was much faster than the MongoDbAppender.

A simple load test written with NUnit confirmed it: logging 10k messages on my machine takes 2000 ms with the MongoDbAppender but only 97 ms with a FileAppender. After some profiling with ANTS Profiler I was puzzled, because the time was not spent in my code. As a next step I created a test appender that inherits from BufferingAppenderSkeleton but does nothing (it does not log anything); to my surprise it takes almost the same time as my MongoDbAppender. Where is the bottleneck?

Investigating how Log4Net works internally

This post on Stack Overflow explains the problem. To summarize: the log4net LoggingEvent object has many volatile properties. These properties have the correct value when the event is passed to the appender, but they may no longer be valid afterwards. If you store LoggingEvent objects for later use (as BufferingAppenderSkeleton does), you need to fix the volatile data on the LoggingEvent, via a method called FixVolatileData, to avoid reading wrong values.
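To make this concrete, here is a minimal sketch (not the author's actual appender, and the class name is illustrative) of an appender that buffers events for later use; setting the Fix property captures the volatile values at log time:

```csharp
using System.Collections.Generic;
using log4net.Appender;
using log4net.Core;

// Sketch: any appender that keeps LoggingEvent objects around after Append()
// returns must fix their volatile data first, otherwise properties like the
// thread name or user name may be wrong when read later.
public class SketchBufferingAppender : AppenderSkeleton
{
    private readonly List<LoggingEvent> _buffer = new List<LoggingEvent>();

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Setting Fix captures the volatile values now. FixFlags.All is safe
        // but expensive, as the rest of the post shows.
        loggingEvent.Fix = FixFlags.All;
        _buffer.Add(loggingEvent);
    }
}
```

This is exactly the trade-off BufferingAppenderSkeleton faces internally: fix everything and be safe, or fix less and be fast.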

To verify that this is the real reason, I simply changed the PatternLayout of the FileAppender to use one of these volatile properties, the username (which I know is an expensive one). Here is the new layout.

"%date %username [%thread] %-5level %logger [%property{NDC}] - %message%newline"

Running the test again confirmed my supposition: FileAppender execution time rose from 90 ms to 1100 ms just because I asked it to include the %username property. This means that simply accessing the UserName property slows performance down by a factor of ten.

This test shows an important fact: the performance of the various log4net appenders depends on which information you log. If you are not interested in the %username property, avoid it, because it slows your appender down a lot. The same is true for the Location property of the LoggingEvent object.
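For example, a cheap layout simply avoids the expensive conversion patterns. A hypothetical FileAppender configuration (file name and appender name are illustrative) could look like this:

```xml
<!-- Hypothetical configuration sketch: the conversionPattern avoids %username
     and the location-related patterns (%location, %file, %line, %method),
     which force log4net to compute expensive volatile data for every event. -->
<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="app.log" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>
```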

Why BufferingAppenderSkeleton is so slow even if you do not log anything

The previous explanation of how FixVolatileData works also explains why simply inheriting from BufferingAppenderSkeleton slows the appender down so much, even if you do nothing in the SendBuffer function.

Since BufferingAppenderSkeleton cannot know in advance which fields of the LoggingEvent you are going to access, it takes the most conservative approach: it fixes all the properties of every LoggingEvent before storing it in its internal buffer. Fixing every volatile property has the same penalty as accessing all of them, and this explains everything.

How can you solve it?

The solution is super simple: BufferingAppenderSkeleton has a Fix property with the same meaning as the Fix property on the LoggingEvent object; it is a flags enumeration specifying which fields should be fixed. It turns out that if you avoid fixing the UserName and Location properties (the two most expensive ones), you get a real performance boost. If you do not need those two properties but you care about speed, this can be the solution.

Here is how my BufferedMongoDBAppender initializes itself.

public override void ActivateOptions()
{
    if (Settings.LooseFix)
    {
        //default Fix value is ALL, we need to avoid fixing some values that are
        //too heavy. We skip FixFlags.UserName and FixFlags.LocationInfo
        Fix = FixFlags.Domain | FixFlags.Exception | FixFlags.Identity |
              FixFlags.Mdc | FixFlags.Message | FixFlags.Ndc |
              FixFlags.Properties | FixFlags.ThreadName;
    }

    base.ActivateOptions();
}

I’ve added a simple setting called LooseFix to tell the base class that I do not want to fix UserName or LocationInfo. This simple trick gives the appender a performance boost, at the cost of giving up a couple of properties in the logs. The name LooseFix comes from an obsolete method on LoggingEvent called FixVolatileData, which has a boolean parameter called fastButLoose to avoid fixing the properties that are slow to fix.

Here is what the documentation says about this parameter

The fastButLoose param controls the data that is fixed. Some of the data that can be fixed takes a long time to generate, therefore if you do not require those settings to be fixed they can be ignored by setting the fastButLoose param to true. This setting will ignore the LocationInformation and UserName settings.

This piece of documentation confirms that the two most expensive properties to fix are UserName and LocationInformation.


I’ve also added another little improvement to maximize performance: saving data on a dedicated thread, to avoid blocking the caller when the buffer is full. When the BufferingAppenderSkeleton buffer is full, it calls a virtual function synchronously with the logging call; this means that the call that triggers the buffer flush has to wait for all buffered objects to be saved to Mongo. Using a different thread avoids slowing down the caller, but you need to be really careful, because you can waste a lot of memory on LoggingEvent objects waiting to be saved to Mongo.
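A minimal sketch of this idea (the class and method names are illustrative, not the actual Jarvis code): a bounded BlockingCollection lets a dedicated thread drain the buffer and, as a side effect, blocks the caller when too many events are pending, which also limits memory usage.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: a dedicated consumer thread drains a bounded queue of items
// (e.g. LoggingEvent objects) and saves them, so the logging call returns fast.
public class BackgroundSaver<T>
{
    // Bounded capacity: Add() blocks the caller once 10k items are pending,
    // limiting the memory spent on events waiting to be saved.
    private readonly BlockingCollection<T> _queue = new BlockingCollection<T>(10000);
    private readonly Task _consumer;

    public BackgroundSaver(Action<T> save)
    {
        _consumer = Task.Factory.StartNew(() =>
        {
            foreach (var item in _queue.GetConsumingEnumerable())
                save(item); // e.g. insert the event into Mongo
        }, TaskCreationOptions.LongRunning);
    }

    public void Enqueue(T item)
    {
        _queue.Add(item); // blocks only when the queue is full
    }

    public void Flush()
    {
        // Stop accepting new items and wait for the consumer to drain the queue.
        _queue.CompleteAdding();
        _consumer.Wait();
    }
}
```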

Finally, here are the results of logging 10k messages, with some simple timings.

With MongoAppender - Before flush 2152 ms
With MongoAppender - Iteration took 2158 ms
With MongoAppender Loose Fix - Before flush 337 ms
With MongoAppender Loose Fix - Iteration took 343 ms
With MongoAppender save on different thread - Before flush 1901 ms
With MongoAppender save on different thread - Iteration took 2001 ms
With MongoAppender Loose fix and save on different thread - Before flush 37 ms
With MongoAppender Loose fix and save on different thread - Iteration took 379 ms

The test is really simple: I log 10k times and take a timing, then call flush to wait for all the logs to be written and take another timing. The first two lines show that the standard buffered Mongo appender is quite slow: it takes about two seconds to handle all the logs. With LooseFix set to true we give up the UserName and LocationInfo properties, but the performance boost is impressive; again there is almost no difference before and after the flush.

The third couple of lines shows that saving on a different thread does not gain much time if you fix all the properties, but the last two lines show that saving on a different thread with LooseFix set to true needs only 37 ms to log (the appender keeps in memory all the logs to be saved on the other thread), although you then need a little more time when it is time to flush.

To avoid hammering memory if you are logging an enormous number of messages, it is probably a good idea to set a limit on how many events can be stored in memory before triggering an internal flush and waiting for the saving thread to drain the buffer. Currently, if I have more than 10k LoggingEvent objects in memory, I block the caller until the dedicated thread has saved at least an entire buffer.

Gian Maria.

Running NUnit Tests in a TFS 2015 Build vNext


In the TFS 2015 CTP you can preview the new build system that will be available with the next version of Team Foundation Server. If you schedule a new build that contains NUnit tests, you will probably notice that your unit tests are not executed during the build.

To execute NUnit tests in a vNext build, you should make sure that the appropriate test adapters are available to the agents that will run the build.

The easiest way to make the NUnit adapters available is to download them from the Visual Studio Gallery, unzip the .vsix file, and then copy all the files it contains into this folder:

C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\Extensions

This operation should be done on every machine where you deployed test agents that will run builds with NUnit tests. This can be annoying if you have lots of agents; if you prefer, the Test Runner task has an option to specify the path where the agent can find the needed test adapters.

The easiest way to automate the whole process is to add the NUnit Test Adapter NuGet package to your test project, as a standard NuGet reference.


Figure 1: Add Nuget Package for NUnit Test Adapter
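Alternatively, the same can be done from the Package Manager Console; the package id below is the one I believe is current, but verify it on the NuGet gallery:

```shell
# Run in the Visual Studio Package Manager Console,
# with the test project selected as default project.
Install-Package NUnitTestAdapter
```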

Once you’ve added the package, you should see all the needed assemblies under the packages directory of your solution. Now you can specify the location that contains the NUnit Test Adapter directly from the build definition, using this string:


Please note the use of quotes (") and of the $(Build.SourceDirectory) macro to specify the location where the build is taking place.


Figure 2: Specify path of custom Test Adapter inside build definition.

Pros and cons of the two approaches:

Copying the adapters into the Visual Studio TestWindow folder

Pro: You do not need to specify the path of the adapters in each build definition.

Cons: You must do this on every machine where an agent is deployed.

Specifying the path to a custom test adapter with the NUnit packages

Pro: You can use the version you need by referencing different NuGet packages, and you do not need access to the machines where the agents are installed.

Cons: You need to remember to use this setting in each build definition where you want to run NUnit tests, and all test adapters must be placed in the same directory.

Gian Maria.

Unable to create workspace after TFS upgrade, a workspace already exists

At a customer site we performed an upgrade from TFS 2010 to TFS 2013, moving from a computer in a workgroup to a computer in a domain. With this operation we finally defined a DNS name for TFS, so all users can access it by name instead of by machine name. After the upgrade, all users were warned of this change and we simply told them to recreate their workspaces pointing to the new server.

After the upgrade, some users were not able to create a workspace in the same location as their old workspace: Visual Studio complains that a workspace already exists in that location.

Using the tf workspaces or tf workspace commands did not solve the problem: even after we deleted all workspaces associated with the user, when he tried to recreate the workspace he got an error because a workspace already existed at that location.
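For reference, these are the kinds of commands we used (the collection URL, workspace name and user name are illustrative); they operate on the user account as TFS currently resolves it, which is exactly why they were not enough here:

```shell
:: List the workspaces a user owns in the collection
:: (run from a Visual Studio Developer Command Prompt).
tf workspaces /collection:http://tfs:8080/tfs/DefaultCollection /owner:CONTOSO\mario /computer:*

:: Delete a specific workspace owned by that user.
tf workspace /delete MyWorkspace;CONTOSO\mario /collection:http://tfs:8080/tfs/DefaultCollection
```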

After a brief investigation we found the reason. The TFS instance was migrated from a workgroup to a domain, and some of the domain users have the same name they had on the workgroup machine. Upgrading TFS with a backup and restore preserves everything, so the migrated TFS instance still contains the old workspaces. When one of these users tries to create a new workspace in the same location on disk, TFS complains because the old workspace is still there; but when we try to delete that workspace with the tf workspaces command, it fails because the command uses the new user, who is different even though the name is identical.

Whenever you have this kind of problem, the easiest solution is to download and install TFS Sidekicks, connect to TFS with an account that has administrative privileges, and check all the workspaces configured in the system.


Figure 1: Tfs Sidekicks in action, showing all workspaces defined in a Project Collection

Thanks to this tool, we verified that even though we had deleted the old workspaces from the command line, they were still there, because the users, despite having the same name, are different. Deleting the old workspaces from Sidekicks resolved the problem, and users were able to recreate all their workspaces.

If you have problems with workspaces, Sidekicks is a really useful tool because it gives you a nice GUI to list, check, delete and manage all the workspaces defined in your TFS.

Gian Maria.

User added to Team Project have no permission after upgrade from TFS2010 to TFS2013

I performed an upgrade from TFS 2010 to TFS 2013 at a customer site last week. The upgrade consisted of moving to a different machine and from a workgroup to an Active Directory domain. The operation was simple, because the customer uses only source control and wanted to spend minimal time on the operation, so we decided on this strategy:

  1. Stop TFS in the old machine
  2. Backup and restore db in the new machine
  3. Upgrade and verify that everything works correctly

They did not care about user remapping, Reporting Services or other features; they just wanted a quick migration to the new version, to use the local workspaces feature (introduced with TFS 2012). They did not care about remapping old users to new users; they only cared about not spending too much time on the upgrade.

The upgrade went smoothly, but we started facing a couple of problems. The first one: after the migration each team project has no users, because the machine is now joined to a domain with different users; but when we add users to a team project, they are not able to connect to it and seem to have no permissions. All users that are Project Collection Administrators can use TFS with no problem.

The reason is simple: TFS 2012 introduced the concept of Teams. Each Team Project can have multiple Teams, and when you add users from the home page of a Team, you are actually adding people to a TFS group that corresponds to that Team. For each Team Project, a default Team with the same name as the Team Project is automatically created.


Figure 1: Users added to Team through home page.

In the above picture I’ve added two users to the BuildExperiments Team; we can verify this in the Settings page of the Team Project.


Figure 2: Users added through the home page are added to the corresponding TFS security group

To understand the permissions of those users, you should use the administration section of TFS; as you can see in Figure 3, the BuildExperiments Team has no permissions associated with it.


Figure 3: Permission associated to the Team Group

The reason is that the Team is not part of the Contributors TFS group, as can be verified in the Member Of section of the group properties.


Figure 4: The Team group belongs only to the Project Valid Users group

When you create a new Team Project, the default Team (with the same name as the Team Project) is automatically added to the Contributors group; it is that Team that gives users the right to access the Team Project. To fix the above problem you can manually add the Team's TFS group to the Contributors group using the Join Group button. Once the Team group has been added to the Contributors group, all the people you add through the web interface are able to access the Team Project.

This is standard behavior in TFS: when you create a new Team, the UI suggests adding the new Team group to an existing group to inherit its permissions.


Figure 5: The default option for a new group is to be part of the Contributors group.

This is an optional choice; you can choose a different security group, or no group at all, but then you should remember to explicitly grant permissions to the corresponding Team group.

When people cannot access TFS and you believe they should, always double check all the groups they belong to and the effective permissions associated with them.

Gian Maria.