BufferingAppenderSkeleton performance problem in log4net

The problem

Today I was working on the Jarvis project and I got a warning from another developer telling me that the MongoDbAppender for Log4net slows down the application during a full rebuild of all projections. In that specific situation we have the log level set to DEBUG and the rebuild generates 800k log entries in the Mongo database, so I expected the Mongo appender to slow things down a little. The problem is: using a standard FileAppender to measure the difference in speed, it turned out that the FileAppender was much faster than the MongoDbAppender.

Setting up a simple load test with an NUnit test confirmed this: logging 10k messages on my machine takes 2000 ms with the MongoDbAppender but only 97 ms with a FileAppender. After some profiling with ANTS Profiler I was puzzled, because the time was not spent in my code. The next step: I created a test appender that inherits from BufferingAppenderSkeleton but does nothing (it does not log anything); to my surprise it took almost the same time as my MongoDbAppender. Where is the bottleneck??
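
For reference, a do-nothing buffering appender like the one used in the test takes only a few lines; here is a minimal sketch (the class name is mine, not from the original code).

using log4net.Appender;
using log4net.Core;

//test appender: inherits all the buffering logic from
//BufferingAppenderSkeleton but discards every buffered event
public class NullBufferingAppender : BufferingAppenderSkeleton
{
    protected override void SendBuffer(LoggingEvent[] events)
    {
        //intentionally empty: nothing is written anywhere
    }
}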

Investigating how Log4Net works internally

This post on Stack Overflow explains the problem. To summarize, here is what happens: the LoggingEvent object from Log4net has lots of volatile properties. These properties have the correct value when the event is passed to the appender, but that value may no longer be valid afterwards. If you store LoggingEvent objects for later use (as the BufferingAppenderSkeleton does), you need to call a method named FixVolatileData on the LoggingEvent to avoid reading wrong data later.

To verify that this is the real reason, I simply changed the PatternLayout of the FileAppender to include one of these volatile properties, the username (which I knew was an expensive one). Here is the new layout.

"%date %username [%thread] %-5level %logger [%property{NDC}] - %message%newline"

Running the test again confirmed my supposition: the FileAppender execution time rose from 90 ms to 1100 ms, just because I asked it to include the %username property. This means that simply accessing the UserName property slows performance down by a factor of ten.

This test shows an important fact: the performance of the various log4net appenders depends on which information you are logging. If you are not interested in the %username property, avoid using it, because it slows your appender down a lot. The same is true for the Location property of the LoggingEvent object.

Why BufferingAppenderSkeleton is so slow even if you do not log anything

The previous explanation of how FixVolatileData works also explains why simply inheriting from BufferingAppenderSkeleton slows the appender down a lot, even if you do nothing in the SendBuffer function.

Since BufferingAppenderSkeleton cannot know in advance which fields of the LoggingEvent you are going to access, it takes the most conservative approach: it fixes all the properties of every LoggingEvent before storing it in its internal buffer. Fixing a volatile property has the same cost as accessing it, and this explains everything.

How can you solve it

The solution is super simple: BufferingAppenderSkeleton has a Fix property with the same meaning as the Fix property of the LoggingEvent object; it is a flags enumeration specifying which fields should be fixed. It turns out that if you avoid fixing the UserName and Location properties (the two most expensive ones), you get a real performance boost. If you are not interested in these two properties but you care about speed, this can be a solution.

Here is how my BufferedMongoDBAppender initializes itself.

public override void ActivateOptions()
{
    base.ActivateOptions();
    if (Settings.LooseFix)
    {
        //default Fix value is ALL; we avoid fixing the values that are
        //too heavy: we skip FixFlags.UserName and FixFlags.LocationInfo
        Fix = FixFlags.Domain | FixFlags.Exception | FixFlags.Identity |
              FixFlags.Mdc | FixFlags.Message | FixFlags.Ndc |
              FixFlags.Properties | FixFlags.ThreadName;
    }
}

I've added a simple setting called LooseFix to tell the base class that I want to fix neither UserName nor LocationInfo. This simple trick gives the appender a real performance boost, at the cost of giving up a couple of properties in the logs. The name LooseFix comes from an obsolete overload of FixVolatileData on LoggingEvent that has a boolean parameter called fastButLoose, used to avoid fixing the properties that are slow to fix.

Here is what the documentation says about this parameter:

The fastButLoose param controls the data that is fixed. Some of the data that can be fixed takes a long time to generate, therefore if you do not require those settings to be fixed they can be ignored by setting the fastButLoose param to true. This setting will ignore the LocationInformation and UserName settings.

This piece of documentation confirms that the two most expensive properties to fix are UserName and LocationInformation.


I've also added another little improvement to maximize performance: saving data on a dedicated thread to avoid blocking the caller when the buffer is full. When the BufferingAppenderSkeleton buffer is full, it calls a virtual function synchronously with the logger call; this means that the call that triggers the buffer flush has to wait for all the buffered objects to be saved into Mongo. Using a different thread avoids slowing down the caller, but you need to be really careful, because you can waste a lot of memory on LoggingEvent objects waiting to be saved to Mongo.

Finally, here are the results of logging 10k messages, with some simple timing.

With MongoAppender - Before flush 2152 ms
With MongoAppender - Iteration took 2158 ms
With MongoAppender Loose Fix - Before flush 337 ms
With MongoAppender Loose Fix - Iteration took 343 ms
With MongoAppender save on different thread - Before flush 1901 ms
With MongoAppender save on different thread - Iteration took 2001 ms
With MongoAppender Loose Fix and save on different thread - Before flush 37 ms
With MongoAppender Loose Fix and save on different thread - Iteration took 379 ms

The test is really simple: I log 10k times and take the timing, then call flush to wait for all the logs to be flushed and take another timing. The first two lines show that the standard buffered Mongo appender is quite slow: it takes more than 2 seconds to handle all the logs. With LooseFix set to true we give up the UserName and LocationInfo properties, but the performance boost is impressive; again there is almost no difference before and after the flush.

The third couple of lines shows that saving on a different thread does not save much time if you fix all the properties, but the last two lines show that saving on a different thread with LooseFix set to true needs only 37 ms to log (the appender keeps in memory all the logs that will be saved by the other thread), although you need a little more time when it is time to flush.

To avoid hammering memory when you log an enormous quantity of messages, it is probably a good idea to set a limit on how many events can be stored in memory before triggering an internal flush and waiting for the saving thread to shrink the buffer. Currently, if I have more than 10k LoggingEvent objects in memory, I block the caller until the dedicated thread has saved at least an entire buffer.
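
I'm not showing the actual Jarvis code here, but a minimal sketch of the idea could use a bounded BlockingCollection: a long-running task drains the queue and writes to Mongo, and when too many events are pending, Add blocks the caller, which is exactly the back-pressure described above (all names and the capacity are illustrative).

using System.Collections.Concurrent;
using System.Threading.Tasks;
using log4net.Core;

public class BackgroundBufferSaver
{
    //with buffers of ~100 events, 100 queued buffers ≈ 10k pending events
    private readonly BlockingCollection<LoggingEvent[]> _queue =
        new BlockingCollection<LoggingEvent[]>(boundedCapacity: 100);

    public BackgroundBufferSaver()
    {
        //dedicated long-running task that drains the queue
        Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
    }

    //called from SendBuffer: returns immediately unless the queue is full,
    //in which case the caller blocks until the saver catches up
    public void Enqueue(LoggingEvent[] events)
    {
        _queue.Add(events);
    }

    private void Consume()
    {
        foreach (var events in _queue.GetConsumingEnumerable())
        {
            SaveToMongo(events);
        }
    }

    private void SaveToMongo(LoggingEvent[] events)
    {
        //actual Mongo insert omitted for brevity
    }
}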

Gian Maria.

Use Capped collection on MongoAppender for Log4Net

One of the coolest features of Mongo is the concept of the Capped Collection, a "fixed size" collection. It is based on a FIFO queue where the first record to be discarded is the first inserted, and this is perfect for a log collection that automatically purges old logs without any user intervention.

To automatically enable this feature in the Log4Net Mongo appender you need a little modification to the code, because the original code simply gets a reference to the collection like this:

connection = MongoServer.Create(mongoConnectionString.ToString());
var db = connection.GetDatabase(DatabaseName);
collection = db.GetCollection(CollectionName);

The C# driver for Mongo automatically creates a collection if it is not present; this means that when you call db.GetCollection and the collection does not exist, it is created automatically, but it is not capped. To solve this problem you can modify the initialization code as follows.

if (!db.CollectionExists(CollectionName))
{
    //create the collection as capped, sized from the CappedSize property
    var options = CollectionOptions.SetCapped(true).SetMaxSize(CappedSize);
    db.CreateCollection(CollectionName, options);
}
collection = db.GetCollection(CollectionName);

The MongoDB C# driver has a class called CollectionOptions used to set the options for creating a new MongoCollection, and it is accessed through a really easy fluent interface: in my example I call SetCapped(true) to enable a capped collection and SetMaxSize() to set the maximum size in bytes. The size of the capped collection is stored in an appender property called CappedSize; the default is 500 MB, but you can set any size you like in standard log4net configuration.
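
As a sketch of how the size could be configured: log4net maps XML child elements to public appender properties by name, so a public CappedSize property can be set with a cappedSize element. The type name and the other properties below are illustrative, not the appender's actual configuration schema.

<appender name="MongoAppender" type="xxx.BufferedMongoDBAppender, xxx">
  <connectionString value="mongodb://localhost" />
  <collectionName value="logs" />
  <!-- capped collection of 100 MB instead of the default 500 MB -->
  <cappedSize value="104857600" />
</appender>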

Gian Maria.

Using Mongo Database to store Log4Net logs

One of the simplest and most useful ways to introduce a document database like Mongo into your organization is to use it as log storage. If you use log4net you can download a Mongo appender capable of storing logs inside Mongo with a few lines of code, and you are ready to go.

The original appender is really good, but I made a little modification to enable it to store complex items in extended properties: just locate the function LoggingEventToBSON and modify the section that stores the composite properties with the following code.

if (compositeProperties != null && compositeProperties.Count > 0)
{
    var properties = new BsonDocument();
    foreach (DictionaryEntry entry in compositeProperties)
    {
        BsonValue value;
        //try to map the value to a standard BsonValue; if that fails,
        //serialize the entire object to a BsonDocument
        if (!BsonTypeMapper.TryMapToBsonValue(entry.Value, out value))
            properties[entry.Key.ToString()] = entry.Value.ToBsonDocument();
        else
            properties[entry.Key.ToString()] = value;
    }
    toReturn["customproperties"] = properties;
}

The only modification I made to the original code is to try to convert the value stored in the composite properties to a BsonValue; if the conversion does not succeed, I convert the entire object to a BsonDocument and store that in Mongo. The original code stores extended properties with ToString(), which has two disadvantages: you lose the type of the data, and you cannot store complex objects. With this modification, if you add an Int32 as an extended property it is stored as a numeric value inside the database.

Using Mongo as log storage gives you lots of advantages. First of all, document databases are really fast at inserting data; moreover, they are not schema bound, so you do not need to define a schema to contain your logs. This feature, combined with the ability of log4net to store extended properties in its context, permits you to write code like this:

log4net.GlobalContext.Properties["CurrentItem"] = item;

With this simple line of code, each subsequent log stored in the Mongo database will have an associated object of type item, until you remove the object from the GlobalContext.
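
A minimal usage sketch (the names are illustrative): set the property before logging, then clear it when the object should no longer be attached.

using log4net;

public static class LoggingSample
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingSample));

    public static void ProcessItem(object item)
    {
        //every subsequent log carries the item in customproperties
        GlobalContext.Properties["CurrentItem"] = item;
        Log.Debug("Processing item");
        //clear the property to stop attaching the object to logs
        GlobalContext.Properties["CurrentItem"] = null;
    }
}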


Figure 1: Your log contains all data from extended properties

As you can see from Figure 1, the log now has a complex object associated with it. This capability is awesome, because you simply store complex objects inside the GlobalContext and they are saved inside the log as customproperties, without any additional lines of code.

Clearly you can now use all the advanced query capabilities of Mongo to view/filter/map-reduce the logs using these additional properties.

Gian Maria.

Managing remote logging with Log4net and WCF

I have applications that can work in two distinct configurations. They are based on WPF and MVVM, where the ViewModels communicate with the Domain / Business Logic through services like IXxxxService. All the ViewModels depend on one or more services, and thanks to Castle I can decide with the configuration file which concrete class to use for each service.

When the software is used internally it has direct access to the database, so I configure Castle to use the concrete classes of the various services; but when the software is deployed to external users, who have no access to the intranet, all communication goes through WCF. This is done transparently, because I have a facility that resolves the various IXxxxService interfaces with WCF proxy classes.

All the software logs a lot, to be able to diagnose errors as soon as possible, and I have an IManagementService with a method dedicated to logging.

[OperationContract(IsOneWay = true, ProtectionLevel = ProtectionLevel.EncryptAndSign)]
void Log(LogDto log);

The implementation of this method is really really simple.

public void Log(Dto.LogDto log)
{
    log.Log(Logger);
}

The LogDto has a Log method that can log everything to a standard Castle ILogger, which in turn uses Log4Net, so a remote program can send logs through WCF.
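
The DTO itself is not shown in this post; here is a minimal sketch of how its Log method could replay the carried data on a Castle ILogger. The property names match the ones the custom appender fills later in this post; the replay logic is an assumption.

using System;
using Castle.Core.Logging;

public class LogDto
{
    public string LoggerName { get; set; }
    public string Message { get; set; }
    public string FullExceptionData { get; set; }

    public void Log(ILogger logger)
    {
        if (!String.IsNullOrEmpty(FullExceptionData))
        {
            //log as error when the original event carried an exception
            logger.Error(LoggerName + " - " + Message + Environment.NewLine + FullExceptionData);
        }
        else
        {
            logger.Info(LoggerName + " - " + Message);
        }
    }
}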

My problem is that the various ViewModels use classes from a shared library, and those classes use Log4Net internally to log. If the software is deployed on the internal network I have no problem, because I can redirect logging to a log database; but when the software is used by external users, how can I send all those logs to the server?

The solution is simple: I need to create a custom Log4Net appender that intercepts Log4Net logs and redirects them to the IManagementService.

class ManagementLogAppender : AppenderSkeleton
{
    private IManagementService _managementService;

    public ManagementLogAppender(IManagementService managementService)
    {
        _managementService = managementService;
    }

    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        LogDto logDto = new LogDto();
        logDto.Message = loggingEvent.MessageObject.ToString();
        if (loggingEvent.ExceptionObject != null)
        {
            logDto.FullExceptionData = loggingEvent.ExceptionObject.ToString();
        }
        logDto.LoggerName = loggingEvent.LoggerName;
        _managementService.Log(logDto);
    }
}

This appender simply sends the log to the IManagementService, so I have a centralized point where all the remote logging takes place. Since I have more than one custom appender, I usually register all of them in the Castle Windsor configuration and add every registered one with this simple snippet of code, run from the bootstrapper.

var allAppender = IoC.ResolveAll<IAppender>();
Hierarchy repository = LogManager.GetRepository() as Hierarchy;
foreach (IAppender appender in allAppender)
{
    repository.Root.AddAppender(appender);
}
repository.RaiseConfigurationChanged(EventArgs.Empty);

This permits me to have appenders that declare dependencies, like the ManagementLogAppender, which depends on IManagementService. Thanks to the same facility, I can use the concrete class on the intranet (direct use of log4net) or the dynamically generated proxy that sends logs through WCF.
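
The registration itself is not shown; a sketch of how it could look in Windsor, assuming IManagementService resolves to either the concrete service or the WCF proxy, follows (ManagementService is a hypothetical concrete class name).

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using log4net.Appender;

var container = new WindsorContainer();
container.Register(
    //in the intranet scenario this is the concrete class; for external
    //users the WCF facility would supply a proxy instead (assumption)
    Component.For<IManagementService>().ImplementedBy<ManagementService>(),
    //registered against IAppender so that ResolveAll<IAppender>() finds it
    Component.For<IAppender>().ImplementedBy<ManagementLogAppender>());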

Gian Maria.

Intercept Log4Net messages in a Windows application

Scenario: I have some services that run in the background, but I want the user to be able to launch them interactively with a Windows Forms application, and I want to intercept all the log4net messages issued by the various components that work in the background.

Thanks to the super-easy way to write an appender in log4net, obtaining this result is really simple.

public class Log4NetLogEventSourceAppender : AppenderSkeleton
{
    private Object _syncRoot;

    public Log4NetLogEventSourceAppender()
    {
        _syncRoot = new object();
    }

    /// <summary>
    /// Occurs when a LoggingEvent is appended.
    /// </summary>
    public static event EventHandler<OnLog4NetLogEventArgs> OnLog;

    protected override void Append(LoggingEvent loggingEvent)
    {
        EventHandler<OnLog4NetLogEventArgs> temp = OnLog;
        if (temp != null)
        {
            lock (_syncRoot)
            {
                temp(null, new OnLog4NetLogEventArgs(loggingEvent));
            }
        }
    }
}

public class OnLog4NetLogEventArgs : EventArgs
{
    public LoggingEvent LoggingEvent { get; private set; }

    public OnLog4NetLogEventArgs(LoggingEvent loggingEvent)
    {
        LoggingEvent = loggingEvent;
    }
}

This is a quick and dirty approach: the _syncRoot object serializes all events because, since OnLog is a shared (static) event, I want to prevent multiple threads from messing up the logging. Thanks to the static event, you can simply add this appender to the list of enabled appenders.

   1: <appender

   2:     name="Interceptor"

   3:     type="xxx.Log4NetLogEventSourceAppender, xxx">

   4:     <layout type="log4net.Layout.PatternLayout">

   5:       <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline %property{analyzing_url}" />

   6:     </layout>

   7:   </appender>

Now in code you simply subscribe to the appender's static event.

Log4NetLogEventSourceAppender.OnLog += Log4NetLogEventSourceAppender_OnLog;

And do something useful in the event handler: in my program I use a custom MVP pattern, and I simply add the log to the view (which in turn shows a grid with all the logs, with filtering and sorting capabilities).

void Log4NetLogEventSourceAppender_OnLog(object sender, OnLog4NetLogEventArgs e)
{
    View.AppendLog(e.LoggingEvent);
}

Thanks to the extreme flexibility of log4net it is possible to route logs to a great variety of locations, and with this simple interceptor you can propagate logging wherever you want.

Another useful application of this appender was in a scheduled service.

private void ReactToLogMessages(OnLog4NetLogEventArgs e, StringBuilder log4netLogs, ScanLog scanLog)
{
    if (e.LoggingEvent.Level >= log4net.Core.Level.Info)
    {
        log4netLogs.AppendLine(e.LoggingEvent.RenderedMessage);
    }
    //now check the severity level
    if (e.LoggingEvent.Level == log4net.Core.Level.Warn)
    {
        scanLog.NumOfWarnings++;
    }
    else if (e.LoggingEvent.Level == log4net.Core.Level.Error)
    {
        scanLog.NumOfErrors++;
    }
}

This function simply appends all logs of level Info or greater to a string, but more importantly it counts the number of warnings and errors; after the task finishes, it warns the administrator if there are errors, or more than a certain number of warnings. This makes it simple to react to tasks that have problems.
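
The notification step is not shown in the post; a sketch of the post-run check, with a hypothetical threshold and mail helper, could look like this:

//runs after the scheduled task finishes; the threshold and the mail
//helper are hypothetical, not from the original code
private void NotifyAdministratorIfNeeded(ScanLog scanLog, StringBuilder log4netLogs)
{
    const int maxAllowedWarnings = 10; //assumed threshold
    if (scanLog.NumOfErrors > 0 || scanLog.NumOfWarnings > maxAllowedWarnings)
    {
        SendMailToAdministrator(
            "Scheduled task completed with problems",   //subject
            log4netLogs.ToString());                    //body with all Info+ logs
    }
}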