Sonar Upgrade from 5.1 to 5.3

I really like SonarQube, but I have to admit that its upgrade procedure is not the best experience I’ve had. Upgrading from 5.1 to 5.3 should be a simple task, but I ran into a couple of problems.

I followed the official upgrade instructions, but the first problem was my connection string to SQL Server, which needed to be changed: with the new version of SonarQube the jTDS driver is no longer used, so the JDBC URL has to be rewritten in the Microsoft JDBC driver format.
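My exact strings are not reproduced here, but as a minimal sketch (server and database names are placeholders, not my real configuration) the change in sonar.properties looks roughly like this:

# Old jTDS-based URL, no longer supported
#sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/sonar;SelectMethod=Cursor

# New URL using the Microsoft JDBC driver
sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar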


After this change the service was able to connect to my database. I then started the upgrade procedure, but I got an error, and here is what I found in the log:

Fail to execute DROP INDEX unique_schema_migrations ON schema_migrations

Luckily enough, another person had already hit this error, and I was able to find the solution in a Stack Overflow post that explains how to recreate the missing index (see the sketch below). After recreating the index I restarted the service, re-executed the migration procedure, and everything now runs fine.
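The exact statement is in the linked post, but the fix is essentially recreating the unique index on the version column of the schema_migrations table, something along these lines:

CREATE UNIQUE INDEX unique_schema_migrations ON schema_migrations (version);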

I cannot help comparing this experience with upgrading an on-premises TFS, which is a fully automated procedure, little more than next, next, next, and TFS is a far more complex product than SonarQube. :)

Gian Maria

BufferingAppenderSkeleton performance problem in log4net

The problem

Today I was working on the Jarvis project and I got a warning from another developer telling me that the MongoDbAppender for log4net slows down the application during a full rebuild of all projections. In that specific situation we have the log level set to DEBUG and the rebuild generates 800k log entries in the Mongo database, so I expected the Mongo appender to slow things down a little. The problem is that, using a standard FileAppender to compare speeds, the FileAppender turned out to be dramatically faster than the MongoDbAppender.

Setting up a simple load test with an NUnit test confirmed it: logging 10k messages on my machine takes about 2000 ms with the MongoDbAppender but only 97 ms with a FileAppender. After some profiling with ANTS Profiler I was puzzled, because the time was not spent in my code. The next step was creating a test appender that inherits from BufferingAppenderSkeleton but does nothing (it does not log anything); to my surprise it took almost the same time as my MongoDbAppender. Where is the bottleneck?
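Before digging in, here is roughly the shape of the load test I used (a sketch with illustrative names, not the actual test code): log 10k messages with the appender under test, take a timing, then shut log4net down so that buffering appenders flush, and take a second timing.

using System;
using System.Diagnostics;
using log4net;
using NUnit.Framework;

[TestFixture]
public class AppenderPerformanceTest
{
    [Test]
    public void Log_10k_messages_and_measure_time()
    {
        // The appender under test (FileAppender, MongoDbAppender, ...) is wired
        // to the "PerfTest" logger in the log4net configuration file.
        ILog log = LogManager.GetLogger("PerfTest");
        Stopwatch watch = Stopwatch.StartNew();

        for (int i = 0; i < 10000; i++)
        {
            log.Debug("Test message " + i);
        }
        Console.WriteLine("Before flush {0} ms", watch.ElapsedMilliseconds);

        LogManager.Shutdown(); // forces buffering appenders to flush their buffer
        Console.WriteLine("Iteration took {0} ms", watch.ElapsedMilliseconds);
    }
}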

Investigating how Log4Net works internally

This post on Stack Overflow explains the problem. To summarize: the LoggingEvent object of log4net has lots of volatile properties. These properties have the correct value when the event is passed to the appender, but the value may no longer be valid afterwards. If you store LoggingEvent objects for later use (as BufferingAppenderSkeleton does), you need to fix the volatile data on the LoggingEvent (the FixVolatileData method, nowadays replaced by the Fix property) to avoid reading wrong data.
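As a minimal illustration of what “fixing” means (this is not the Jarvis appender, just a sketch), an appender that keeps events around should snapshot the volatile data before storing them:

using System.Collections.Generic;
using log4net.Appender;
using log4net.Core;

public class StoringAppenderSketch : AppenderSkeleton
{
    private readonly List<LoggingEvent> _buffer = new List<LoggingEvent>();

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Snapshot the volatile properties now, while they are still valid;
        // FixVolatileData() is the older, now obsolete, way to do the same thing.
        loggingEvent.Fix = FixFlags.All;
        _buffer.Add(loggingEvent); // safe to process later, e.g. in batches
    }
}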

To verify that this was the real reason, I simply changed the PatternLayout of the FileAppender to use one of these volatile properties, the username (which I know is an expensive one). Here is the new layout.

"%date %username [%thread] %-5level %logger [%property{NDC}] - %message%newline"
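For completeness, this pattern simply goes into the conversionPattern of the FileAppender in the log4net configuration; a typical (hypothetical) appender section looks like this:

<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="perf-test.log" />
  <layout type="log4net.Layout.PatternLayout">
    <!-- %username is the volatile (and expensive) property added for the test -->
    <conversionPattern value="%date %username [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
  </layout>
</appender>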

Running the test again confirmed my supposition: the FileAppender execution time went from 90 ms to 1100 ms just because I asked it to include the %username property. This means that simply accessing the UserName property slows performance down by a factor of ten.

This test shows an important fact: the performance of the various log4net appenders depends on which information you are logging. If you are not interested in the %username property, avoid using it, because it will slow your appender down a lot. The same is true for the Location property of the LoggingEvent object.

Why BufferingAppenderSkeleton is so slow even if you do not log anything

The previous explanation of how FixVolatileData works explains why simply inheriting from BufferingAppenderSkeleton slows the appender down a lot, even if you do nothing in the SendBuffer function.

Since BufferingAppenderSkeleton cannot know in advance which fields of the LoggingEvent you are going to access, it takes the most conservative approach: it fixes all the properties of the LoggingEvent for every log entry before storing it in its internal buffer. Fixing every volatile property has the same penalty as accessing them all, and this explains everything.

How can you solve it

The solution is super simple: BufferingAppenderSkeleton has a Fix property with the same meaning as the Fix property on the LoggingEvent object; it is a flags enumeration specifying which fields should be fixed. It turns out that if you avoid fixing the UserName and Location properties (the two most expensive ones), you get a real boost in performance. If you are not interested in those two but you care about speed, this can be a solution.

Here is how my BufferedMongoDBAppender initializes itself.

public override void ActivateOptions()
{
    if (Settings.LooseFix)
    {
        //default Fix value is ALL, we need to avoid fixing some values that are
        //too heavy. We skip FixFlags.UserName and FixFlags.LocationInfo
        Fix = FixFlags.Domain | FixFlags.Exception | FixFlags.Identity |
              FixFlags.Mdc | FixFlags.Message | FixFlags.Ndc |
              FixFlags.Properties | FixFlags.ThreadName;
    }
    base.ActivateOptions();
}

I added a simple setting called LooseFix to tell the base class that I do not want to fix either UserName or LocationInfo. This simple trick gives the appender a performance boost, at the cost of giving up a couple of properties in the logs. The name LooseFix comes from an obsolete method called FixVolatileData on LoggingEvent, which has a boolean parameter called fastButLoose to avoid fixing the properties that are slow to fix.

Here is what the documentation says about this parameter

The fastButLoose param controls the data that is fixed. Some of the data that can be fixed takes a long time to generate, therefore if you do not require those settings to be fixed they can be ignored by setting the fastButLoose param to true. This setting will ignore the LocationInformation and UserName settings.

This piece of documentation confirms that the two most expensive properties to fix are UserName and LocationInformation.


I also added another little improvement to maximize performance: saving data on a dedicated thread, to avoid blocking the caller when the buffer is full. When the BufferingAppenderSkeleton buffer is full it calls a virtual function synchronously with the logger call; this means that the call that triggers the buffer flush has to wait for all the buffered objects to be saved into Mongo. Using a different thread avoids slowing down the caller, but you need to be really careful, because you can waste a lot of memory with LoggingEvent objects waiting to be saved to Mongo.
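This is not the actual appender code, but the idea can be sketched more or less like this, using a bounded BlockingCollection so that the caller is blocked only when too many flushed buffers are waiting to be written (the back-pressure described later in this post):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using log4net.Appender;
using log4net.Core;

public class BackgroundSavingAppenderSketch : BufferingAppenderSkeleton
{
    // Bounded queue of flushed buffers: when it is full, Add() blocks the caller
    // until the background writer catches up, which limits memory usage.
    private readonly BlockingCollection<LoggingEvent[]> _queue =
        new BlockingCollection<LoggingEvent[]>(boundedCapacity: 20);

    public BackgroundSavingAppenderSketch()
    {
        Task.Factory.StartNew(SaveLoop, TaskCreationOptions.LongRunning);
    }

    protected override void SendBuffer(LoggingEvent[] events)
    {
        // Called synchronously by log4net when the internal buffer is full:
        // here we only enqueue, the expensive save happens on the other thread.
        _queue.Add(events);
    }

    private void SaveLoop()
    {
        foreach (LoggingEvent[] events in _queue.GetConsumingEnumerable())
        {
            SaveToMongo(events);
        }
    }

    private void SaveToMongo(LoggingEvent[] events)
    {
        // Placeholder: the real appender inserts the events into a Mongo collection here.
    }

    protected override void OnClose()
    {
        base.OnClose();          // flushes the remaining buffer through SendBuffer
        _queue.CompleteAdding(); // lets the save loop drain and terminate
    }
}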

At the end, here are the results of logging 10k messages, with some simple timing.

With MongoAppender - Before flush 2152 ms
With MongoAppender - Iteration took 2158 ms
With MongoAppender Loose Fix - Before flush 337 ms
With MongoAppender Loose Fix - Iteration took 343 ms
With MongoAppender save on different thread - Before flush 1901 ms
With MongoAppender save on different thread - Iteration took 2001 ms
With MongoAppender Loose fix and save on different thread - Before flush 37 ms
With MongoAppender Loose fix and  save on different thread - Iteration took 379 ms

The test is really simple: I log 10k times and take a timing, then call flush to wait for all the logs to be flushed and take another timing. The first two lines show that the standard buffered Mongo appender is quite slow: it takes about two seconds to handle all the logs. With LooseFix set to true we give up the UserName and LocationInfo properties, but the performance boost is impressive, and again there is almost no difference before and after the flush.

The third couple of lines shows that saving on a different thread does not gain much if you fix all the properties, while the last two lines show that saving on a different thread with LooseFix set to true needs only 37 ms to log (the appender keeps in memory all the logs to be saved by the other thread), but you need a little more time when it is time to flush.

To avoid hammering memory when you log an enormous quantity of messages, it is probably a good idea to set a limit on how many events can be kept in memory before triggering an internal flush and waiting for the saving thread to shrink the buffer. Currently, if I have more than 10k LoggingEvent objects in memory, I block the caller until the dedicated thread has saved at least an entire buffer.

Gian Maria.

How to Deploy Web Site with PowerShell DSC

I do not want to create yet another tutorial on DSC, and I suggest reading some introductory articles like Introducing PowerShell Desired State Configuration before reading this one. Since I’m pretty new to PowerShell and I’m starting to experiment with DSC, I decided to write a script that deploys my favorite test application (TailspinToys :) ) on a single Windows 2012 R2 server using only DSC. This post aims to share my thoughts on the subject.

I was able to complete the script, even if I encountered some difficulties, and I managed to automate almost everything except the installation of SQL Server 2012 (I’m working on it). The goal is to be able to deploy an ASP.NET 4.5 application that uses a SQL Server database to a Windows Server with a fresh install, using only DSC goodness.

First of all, I warn you that most of the resources I needed to deploy my site are not available in the basic distribution of PowerShell and have to be downloaded from Microsoft. To get all the resources in a single package there is a single page on MSDN where you can download the entire DSC Resource Kit.

After you have downloaded the resource kit you should keep a couple of important points in mind. The first one is that these resources are not production ready: they are all experimental, and this is the reason why all their names start with an x. So do not expect any official support program; if you have a problem you ask people in the forums and you will usually find a solution. The other aspect is that if you, like me, appreciate the push model, you need to install all of these modules on every target server. In a certain way this violates my requirement of being able to install on a clean server, because the server is not really “clean” if it needs DSC resources deployed on it. This problem will be mitigated by WMF 5.0, which introduces PowerShellGet to automatically discover, install and update PowerShell modules, so it is really a non-problem.
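Just to give an idea, with WMF 5.0 and PowerShellGet getting a resource module onto a node should become a one-liner along these lines (assuming the module is published on a gallery the node can reach):

Install-Module -Name xWebAdministration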

Once everything is in place, I started creating the script. The first part is the standard one you can find in every PowerShell DSC related article, plus some import instructions to load all the DSC resources I want to use in the package.

Configuration TailspinToys
{
  Import-DscResource -Module xWebAdministration
  Import-DscResource -Module xNetworking
  Import-DscResource -Module xSqlPs
  Import-DscResource -Module xDatabase
  Import-DscResource -Module NTFSPermission

  Node $AllNodes.NodeName
  {
    #Install the IIS Role
    WindowsFeature IIS
    {
      Ensure = "Present"
      Name = "Web-Server"
    }

    # Required for SQL Server
    WindowsFeature installdotNet35
    {
        Ensure = "Present"
        Name = "Net-Framework-Core"
        Source = "\\neuromancer\Share\Sources_sxs\?Win2012R2"
    }

    #Install ASP.NET 4.5
    WindowsFeature ASP
    {
      Ensure = "Present"
      Name = "Web-Asp-Net45"
    }

    # ... the resources that deploy the web site itself follow in the next post
  }
}

At the beginning of the script the Import-DscResource instructions let me import the various resources I’ve installed; the NTFSPermission resource is taken from an article on the VexaSoft site, and many thanks go to the author for writing that module. The article is really useful because it shows how easy it is to create a DSC resource when there is nothing pre-made that fits your purpose.

I use a Configuration block, and the special variable $AllNodes contains the name of the single server I want to use for the installation. The above part of the script takes care of all the prerequisites of my TailspinToys application. I’m installing .NET 3.5 because it is needed by the SQL Server installation, but sadly I was not able to make xSqlServerInstall work to install SQL Server automatically (it asks me to reboot, and even after rebooting the DSC script stops running). I’ve decided to install SQL Server manually and wait for a better and more stable version of xSqlServerInstall. Then I request IIS and ASP.NET 4.5.

Running the above script with the right configuration data produces a .mof file that can be used to actually configure the target. Here is the configuration data I’m using.

$ConfigurationData = @{
    AllNodes = @(
        @{
            NodeName           = "WebTest2"
            SourceDir          = "\\neuromancer\Drops\TailspinToys_CD_WebTest1\TailspinToys_CD_WebTest1_20140213.1\"
            RebootNodeIfNeeded = $true
        }
    )
}

I need the name of the server and the source directory where the distribution of my web site is stored. In this example I’m using a standard drop folder of a TFS build, so I have my binaries indexed with my symbol server. The .mof file is created simply by calling the newly defined TailspinToys function, passing the configuration above.

TailspinToys -ConfigurationData $ConfigurationData 

Now I have a .mof file that contains everything I need for the deployment, and I can push the configuration to the desired nodes with:

Start-DscConfiguration -Path .\TailspinToys -Wait -Verbose

This starts the configuration, connects to all the nodes (in this example the single machine WebTest2) and “makes it so”, moving the state of the nodes to the desired state. The cool part of DSC is that you specify the state you want on the target nodes without caring about how that state is achieved; that is the job of the various resources. Another interesting aspect is that if a resource is already in the desired state, Start-DscConfiguration does nothing for it. The first time you run the above script it takes a little while, because it installs IIS, but if IIS is already installed on the target node, nothing happens.
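As a quick check you can also ask a node whether it is already in the desired state: running Test-DscConfiguration on the node (or through a CIM session) returns True or False without changing anything.

Test-DscConfiguration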

With a few lines of PowerShell I was able to install IIS, ASP.NET 4.5 and .NET 3.5 on my machines.

In the next article I’ll deal with how to deploy the web site bits.

Gian Maria.

NDepend 4 and CQLinq

You already know that I’m a fan of NDepend, because it is a really useful tool for getting a deep insight into your code and especially for spotting troublesome areas in your project.

Version 4 added a really cool capability called CQLinq, or Code Query LINQ: an amazing feature that lets you query your code using LINQ-style queries. Basically, after analyzing a solution I was presented with a simple dialog asking what I am primarily interested in doing with the result of the analysis.


This feature is really amazing because you can query your code to find almost anything. It would take too long to show you every capability of CQLinq, and you can read about it in the official documentation, but I want to give you just a taste of what you can achieve with it.
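The query from the screenshot is not reproduced here, but a CQLinq query of that shape looks more or less like this sketch (NbLinesOfCode and NbLinesOfComment are standard CQLinq metrics):

from m in Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode, m.NbLinesOfComment }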


Here I’m simply selecting all the methods that contain more than 30 lines of code, and in the select part I’m interested in the number of comment lines. Long methods are painful, but if you find a 40-line method with 30 lines of comments you are in trouble: if a developer put that much comment into a long method, there is surely some weird logic in it.

CQLinq is full of interesting features. As an example, if you select from JustMyCode.Methods you are selecting only your own code, not the code generated by the UI designer. This lets you focus primarily on your code and avoids having the result of the query polluted by designer-generated code.

The NDepend query designer fully supports IntelliSense, which really helps you get to the result in very little time; the result is shown below the query as soon as it compiles, and if you make a mistake you are immediately presented with a simple list of compile errors that helps you understand what is wrong with the query.


As usual, I think NDepend is a must-have tool, especially if you need to find problems in legacy code. I strongly suggest you check out all the possibilities on the official documentation page to get an idea of the capabilities of the tool.

You can also download a trial of the product to try it out yourself on your own project.

Gian Maria.

Git is fantastic but IMHO too complex to use

I blogged some time ago about the fact that I’m not a Git lover, and a couple of days ago I stumbled upon this post (10 things I hate about Git), where the author gives a full explanation of the various reasons why he does not like Git.

If you have read the excellent book by David Platt, Why Software Sucks, you will probably agree that, while Git is surely a really good, fast, powerful and complete source control system that lets you do marvelous stuff, it lacks simplicity. And although it is a tool for programmers, I think it is too difficult to use for basic everyday operations. In my opinion Git is a perfect example of a tool built to make complex things possible, but not to make simple things simpler (as suggested in David Platt’s book).

The result is a tool that is surely the most complete and powerful source control system you can use today, but one that requires a lot of training to avoid getting lost with it. I hope that in the future some effort will be devoted to making everyday work simpler and clearer, so that you do not have to face complexity in everyday work and meet it only when it is time to do complex things. That would really make Git an exceptional and outstanding product.

Gian Maria.