Lifecycle of singleton objects

Some days ago I blogged about an implementation of a persistent cache component based on Managed Esent and PersistentDictionary. This component is injected into other components thanks to Inversion of Control, and it is a singleton object.

Being a singleton is a requirement because it allows different dependent objects to share the same cache; moreover, PersistentDictionary does not allow multiple instances that insist on the same directory. This makes EsentCache a perfect candidate for the Singleton pattern. Now another dilemma arises: since PersistentDictionary implements IDisposable to flush all resources to disk, who will call Dispose on a singleton object?

The solution is obvious: this is a duty of the Inversion of Control engine you use, and it is another reason to design your architecture with Dependency Injection in mind. With an IoC container, being a singleton is a “property” of the registered component, which makes the container responsible for the lifecycle of the singleton. In my application there is a static IoC class (a Service Locator pattern) used by infrastructural code to create objects, based on a Castle Windsor container. Since a WindsorContainer is itself a disposable object, I want to be sure it gets disposed when the application exits.
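In Windsor the singleton lifestyle is declared at registration time (singleton is actually the default lifestyle). A rough sketch of such a registration could look like this, where ICache is just an example interface name, not the actual registration used in the application:

// using Castle.MicroKernel.Registration;
// The container, not the component, owns the lifecycle.
container.Register(
    Component.For<ICache>()
             .ImplementedBy<EsentCache>()
             .LifeStyle.Singleton);

Disposing the container itself, and with it every singleton it resolved, is handled in the static constructor of the IoC class: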

static IoC()
{
    AppDomain.CurrentDomain.ProcessExit += new EventHandler(CurrentDomain_ProcessExit);
    BaseInitialization();
}

The static constructor sets a handler for the ProcessExit event of the current domain. This event has a special purpose, as you can read in the MSDN documentation:

The EventHandler for this event can perform termination activities, such as closing files, releasing storage and so on, before the process ends.

The only drawback is that the execution time of this handler is limited, as explained in the documentation:

Note:

The total execution time of all ProcessExit event handlers is limited, just as the total execution time of all finalizers is limited at process shutdown. The default is two seconds. An unmanaged host can change this execution time by calling the ICLRPolicyManager::SetTimeout method with the OPR_ProcessExit enumeration value.

This could be a problem, because shutting down the IoC engine calls Dispose on every singleton object that was resolved, and we cannot anticipate how much time this operation will need; but we have no better option as a fallback technique for when people forget to call Dispose on the IoC container.

The implementation of the event handler is really simple

static void CurrentDomain_ProcessExit(object sender, EventArgs e)
{
    Shutdown();
}

All work is delegated to the Shutdown() method, which makes it possible for programs to dispose the IoC engine gracefully. For a Windows program you can simply put a call to Shutdown before the end of the Main() method; this puts no limit on Shutdown's execution time. But if for some reason people forget to call Shutdown explicitly when the application ends, resources will still be released thanks to the ProcessExit event handler.
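A minimal sketch of the explicit call (assuming the Shutdown method shown above is exposed as IoC.Shutdown, which is an assumption about the class, not code from the original post):

static void Main(string[] args)
{
    // ... application work ...

    // Explicit shutdown: singleton disposal is not constrained
    // by the two-second ProcessExit time limit.
    IoC.Shutdown();
}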

alk.

AoP with castle part 4–Adding AoP to service oriented App

Previous Parts of the series

Part 1 – The basic of interception
Part 2 – Selecting Methods to intercept
Part 3 – The first interceptor

AOP works great when you have clear interfaces where you want to put some standard, shared logic, and a Service Oriented Application falls into this category. A service is just a bunch of methods that share common behaviors such as validation, logging, security and so on, so it is a good strategy to create an interceptor for each of these behaviors and associate them with the service classes. The good point is that Castle has a dedicated facility to integrate with WCF, which basically is able to resolve WCF service classes with Castle. Setting up such a facility is really simple, and you can follow the instructions found on Castle's site.
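For reference, registering the facility and a service with Windsor might look roughly like the sketch below; the exact API depends on the Castle.Facilities.WcfIntegration version you use, and IMusicStoreService is just an assumed interface name:

// using Castle.Facilities.WcfIntegration;
// using Castle.MicroKernel.Registration;
IWindsorContainer container = new WindsorContainer();
container.AddFacility<WcfFacility>();
container.Register(
    Component.For<IMusicStoreService>()
             .ImplementedBy<MusicStoreService>());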

Surely the first aspect you can add to a service is the ability to log every call (seen in the previous post), but you can also centralize some other operations, such as exception management or NHibernate session handling.

Validation is another operation that benefits from being applied with AOP. In a SOA scenario, services usually accept DTOs containing all the data needed to accomplish a certain operation. DTOs are used because services usually expose a coarse-grained interface to limit the number of calls needed to accomplish a business operation: each call usually travels across a network, and latency forces you to limit the number of calls, which leads to a common pattern where a business operation is implemented as a single call accepting a dedicated DTO. In our example, to create a new album we need to pass an Album object (in a real scenario this object would have a list of songs). It is not a real DTO because it is a domain object, but for this simple example pretend it is one. I've decorated the entity properties with validation attributes.

[DataMember]
[Required("Title property is required")]
[RangeLength("Title property length should be in range 1-50", 1, 50)]
public virtual string Title { get; set; }

[Required("Author property is required")]
[RangeLength("Author property length should be in range 1-50", 1, 50)]
[DataMember]
public virtual string Author { get; set; }

Those attributes are used by a validation library that can take an object as input and tell us whether it is valid. Creating a validation aspect on top of this structure is straightforward: you can simply validate every argument that is not a primitive type, or make all of your DTOs implement a marker interface, or use convention over configuration. The important concept is that a service call accepts one or more DTOs and each of them supports validation.

public class ValidatorAspect : IInterceptor
{
    #region IInterceptor Members

    private DotNetMarche.Validator.Core.Validator validator = new Validator();

    public void Intercept(IInvocation invocation)
    {
        foreach (object argument in invocation.Arguments)
        {
            if (!argument.GetType().IsPrimitive) validator.CheckObject(argument);
        }
        invocation.Proceed();
    }

    #endregion
}

The CheckObject() method takes an object as argument and raises a ValidationException if the object does not satisfy some validation rule. With this simple interceptor I'm sure that each call with an invalid DTO will throw an exception, which brings two distinct advantages: first, no invalid object will ever reach a service method; second, the exception carries the validation errors, so the client can understand why the call failed. Suppose that Album.Title is a required property; without validation, here is the exception returned from a call with invalid arguments.

not-null property references a null or transient valueMusicStore.Entities.Album.Title

Server stack trace: at System.ServiceModel.Channels.ServiceChannel.ThrowIfFaultUnderstood(Message reply, MessageFault fault, String action, MessageVersion version, FaultConverter faultConverter) at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)

So the service, when called with invalid data, does return an exception with some detail about the error.

Clearly, though, this is an NHibernate exception, and the caller probably knows nothing about NHibernate, so it cannot really understand why the call failed. Moreover, it is not a good strategy to let all exceptions flow to the caller, especially for security reasons.

Now activate the validator interceptor and look at what happens when you invoke SaveAlbum() with an empty DTO: a ValidationException is raised and the message is:

Title property length should be in range 1-50

Title property is required.

Author property is required.

Author property length should be in range 1-50

This is a really clear exception that the client can use to understand why the call failed. The next step is to send the exception data back in a way that is more WCF compliant; this is needed because the above examples were run with this configuration:

includeExceptionDetailInFaults="true"

This is a bad practice, because exception details should not flow to the caller; but if you set this value to false, when an exception occurs on the server the client gets:

The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults ….

WCF has a specific technique to send exception data back to the caller in a Service Oriented way: you specify with an attribute which object will carry the fault data to the caller, so whenever an exception occurs the caller receives a well-defined fault object.

[OperationContract, FaultContract(typeof(FaultDetail))]
int Save(Album album);

As you can see, this declaration informs the client that, in case of fault, the service will return failure details in an object of type FaultDetail.


When an exception happens, server code should throw a very specific type of exception, FaultException&lt;T&gt;, with the fault data type as its type parameter, as shown in this code:

throw new FaultException<FaultDetail>(
    new FaultDetail(ex), new FaultReason(ex.Message));

Thanks to AOP we can wrap every exception thrown by the service and create specific data to send back to the client, categorizing the types of error. Here is the code.

public void Intercept(IInvocation invocation)
{
    try
    {
        invocation.Proceed();
    }
    catch (FaultException<FaultDetail>)
    {
        throw;
    }
    catch (ValidationException vex)
    {
        throw new FaultException<FaultDetail>(new FaultDetail(vex), "Validation Failed");
    }
    catch (Exception ex)
    {
        Logger.Error("Intercepted generic exception on service call", ex);
        throw new FaultException<FaultDetail>(
            new FaultDetail(ex), new FaultReason(ex.Message));
    }
}

As you can see, if the call throws a FaultException&lt;FaultDetail&gt; we can simply rethrow it (someone else in the stack already packed a correct WCF fault); if a ValidationException occurs we pack all the validation errors into a FaultDetail object and throw it back to the caller. Finally, for every generic exception we simply pack the exception information into a FaultDetail object and tell the caller that something went wrong.
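The FaultDetail class itself is not shown in this post; a minimal sketch of what such a data contract might look like is below (the property and constructor shapes are assumptions, not the real implementation):

// using System.Runtime.Serialization;
[DataContract]
public class FaultDetail
{
    public FaultDetail() { }

    // Hypothetical constructors matching the usage in the interceptor above.
    public FaultDetail(ValidationException vex) { Message = vex.Message; }

    public FaultDetail(Exception ex) { Message = ex.Message; }

    [DataMember]
    public string Message { get; set; }
}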

Another interesting interceptor manages the unit of work for a single WCF call. For each call it checks whether a session was already created for that call; if not, it creates one and begins a transaction. It also catches every exception: if one occurs, it rolls back the transaction and rethrows. A sketch of the idea follows.
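The unit-of-work interceptor is not listed in the post; this is only a simplified sketch of the idea using plain NHibernate sessions and transactions. For brevity it always opens a new session, while the real interceptor would first check whether a session already exists for the current call:

// using Castle.DynamicProxy;
// using NHibernate;
public class UnitOfWorkAspect : IInterceptor
{
    private readonly ISessionFactory _sessionFactory;

    public UnitOfWorkAspect(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Intercept(IInvocation invocation)
    {
        // One session and one transaction for the whole service call.
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            try
            {
                invocation.Proceed();
                tx.Commit();
            }
            catch
            {
                // On any error roll back the unit of work and rethrow.
                tx.Rollback();
                throw;
            }
        }
    }
}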

The advantage of using AOP is the ability to inject common logic into the service pipeline, handling cross-cutting aspects such as validation, session management and error handling in a single, common place.

Alk.

Castle and Automock, avoid resolving properties

I use the AutoMockingContainer extensively in my test projects, and over time I've built an automocking container that satisfies all of my needs. Thanks to Castle Windsor, using complex logic with the AutoMockingContainer is a breeze. Suppose you have this ViewModel:


Figure 1: ViewModel under test

The only aspect I'm interested in is the SelectedLinkResult public property, which has a lot of logic in its setter; this is needed to react to user selection changes in the UI. When I use the AutoMockingContainer to automock this view model I have a big problem: I need to set up a lot of expectations to make the setter logic work, because when I resolve with automock my AutoMockingContainer tries to resolve each dependency, even properties. To avoid this I need to be able to tell my container to:

  1. avoid resolving any property
  2. avoid resolving dependencies with specific names

The second option is needed when I want some properties to be resolved and others to be excluded, so I need to extend my container with a couple of properties.

public List<String> DependencyToIgnore { get; set; }

public Boolean ResolveProperties { get; set; }

public Boolean CanSatisfyDependencyKey(String dependencyKey)
{
    return !DependencyToIgnore.Contains(dependencyKey);
}

I've added a list of property names to ignore, a property that specifies whether properties should be resolved at all, and finally a simple function that tells whether a specific dependency needs to be resolved. Thanks to Castle, I can use these properties from the sub-dependency resolver used by the container.

public bool CanResolve(
    CreationContext context,
    ISubDependencyResolver parentResolver,
    ComponentModel model,
    DependencyModel dependency)
{
    bool shouldResolveDependencyKey =
        dependency.DependencyType == DependencyType.Service &&
        _relatedRepository.CanSatisfyDependencyKey(dependency.DependencyKey);

    Boolean resolveIfProperty =
        !dependency.IsOptional ||
        _relatedRepository.ResolveProperties;

    return shouldResolveDependencyKey && resolveIfProperty;
}

First of all I check whether the dependency type is Service and call CanSatisfyDependencyKey; then, if the dependency is optional (a property) and ResolveProperties is false, I skip resolution. Now I can write a unit test header like this one:

[TestFixture]
[UseAutoMockingContainer(
    new[] { typeof(TreeNavigatorViewModel) },
    ResolveProperties = false)]
public class TreeNavigatorViewModelFixture : BaseTestFixtureWithHelper

This avoids property resolution completely during the test; the following one is used when I want only the SelectedLinkResult property not to be resolved with a mock.

[TestFixture]
[UseAutoMockingContainer(
    new[] { typeof(TreeNavigatorViewModel) },
    IgnoreDependencies =
    new string[]
    {
        "SelectedLinkResult"
    })]
public class TreeNavigatorViewModelFixture : BaseTestFixtureWithHelper

This makes my tests really simple to write.

alk.

Mock, Service Locator, Automocking container and the hard life of testers

I know, Service Locator is an antipattern, but sometimes, when you begin to refactor existing code written with no IoC in mind, a Service Locator can help you a little in restructuring your code. The Service Locator pattern works this way: you have some static or globally available class named ServiceLocator or IoC, and every object can query it for a service implementation.


Figure 1: Simple schema of Service Locator Pattern, each object can query the SL asking for service
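To make the discussion concrete, a minimal service locator could look like the sketch below; this is only an illustrative version, not the actual IoC class used in my code:

// using Castle.Windsor;
public static class IoC
{
    private static readonly IWindsorContainer container = new WindsorContainer();

    public static T Resolve<T>()
    {
        // Every object in the codebase can ask the locator for a service.
        return container.Resolve<T>();
    }
}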

When you write unit tests, this kind of pattern is really bad. Suppose you want to write a test that exercises a class and verifies that, under certain conditions, a log entry is generated. If the class under test gets its logger with a call to the Service Locator, we have a problem, because we need to find a way to make the Service Locator generate a mock, and give us back that mock so we can verify expectations on it.

If you have a well structured program, you should not use a Service Locator: your objects should declare optional or required dependencies with the usual DI techniques. In that scenario you have no problem using mocks or stubs, because you can inject them from external code. To simplify both scenarios, the concept of an AutoMockingContainer can help. Basically, an AutoMockingContainer is an IoC container that works with these simple rules:

1) if the service is registered, resolve it as usual and resolve all its dependencies with a specific mock strategy;

2) if the service is not registered, return a mock;

3) make all mocks created by the container available to the caller, so the caller can set expectations on them.

The basic tests for the container verify that resolving a registered object works as usual, while asking for an unregistered object gives you a mock.

[Test]
public void VerifyThatNotRegisteredComponentIsResolvedWithMock()
{
   using (AutoMockingContainer.AutoMockingContainer container = CreateAutoMockingContainer())
   {
       IAnInterface res = container.Resolve<IAnInterface>();
       res.Expect(t => t.DoTest()).Return(false);
       Assert.That(res.GetType().FullName, Is.StringContaining("Proxy"));
   }
}

 

[Test]
public void VerifyThatImAbleToGetResolvedMockObjects()
{
   using (AutoMockingContainer.AutoMockingContainer container = CreateAutoMockingContainer())
   {
       IAnInterface res = container.Resolve<IAnInterface>();
       var retrieved = container.GetFirstCreatedMock<IAnInterface>();
       Assert.That(res, Is.EqualTo(retrieved));
   }
}

These two simple tests show that the AutoMockingContainer is able to resolve unregistered components as mock objects, and this helps me solve the original problem: testing a class that internally uses the Service Locator to resolve something, like an ILogger. Suppose this is the code of the class.

public class ObjAnInterface : IAnInterface
{
   #region IAnInterface Members

   public ObjAnInterface()
   {
       _logger = IoC.Resolve<ILogger>();
   }

   private ILogger _logger;

   public bool DoTest()
   {
       _logger.Debug("DoTestCalled");
       return true;
   }

   #endregion
}

The bad point is in the constructor, where the class asks the Service Locator for an ILogger. If we need to test this class and there are a LOT of places in the code where the class is created like this

var xxx = new ObjAnInterface()

it could be feasible to keep the Service Locator and get rid of it during some later refactoring. I do not want to discuss whether it is worth spending time getting rid of the Service Locator; I only want to address the scenario where you have no choice: the Service Locator is there and you cannot remove it.

In my test I want to verify that the ILogger.Debug method is called when I call the DoTest() method of the class. First of all we need our AutoMockingContainer to be able to pass this test.

IAnInterface res1 = container.Resolve<IAnInterface>();
IAnInterface res2 = container.Resolve<IAnInterface>();
IAnInterface res3 = container.Resolve<IAnInterface>();
var retrieved = container.GetFirstCreatedMock<IAnInterface>();
Assert.That(res1, Is.EqualTo(retrieved));
var list = container.GetMock<IAnInterface>();
Assert.That(list, Is.EquivalentTo(new[] { res1, res2, res3 }));

This simple test verifies that the container is able to resolve mocks and that the caller is able to retrieve those mocks to set up expectations on them.

This is not enough; we also need the Service Locator to expose an internal method that can be used to override the real engine for testing purposes:

internal static IDisposable OverrideInstance<T>(T instance)
{
    overrideInstances.Add(typeof(T), instance);
    return new DisposableAction(() => overrideInstances.Remove(typeof(T)));
}
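DisposableAction is a small helper that is not shown in the post; a minimal version (an assumption about its implementation) simply runs a delegate when disposed, so the override is removed at the end of a using block:

public class DisposableAction : IDisposable
{
    private readonly Action _action;

    public DisposableAction(Action action)
    {
        _action = action;
    }

    public void Dispose()
    {
        // Run the cleanup delegate when the caller disposes the token.
        _action();
    }
}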

Now it is possible for me to write code like this.

using (AutoMockingContainer.AutoMockingContainer container = SetupIoCWithAutomock())
{
    container.Register(Component.For<IAnInterface>()
        .ImplementedBy<ObjAnInterface>());
    IAnInterface res = container.Resolve<IAnInterface>();
    res.DoTest();
    var mock = container.GetFirstCreatedMock<ILogger>();
    mock.AssertWasCalled(m => m.Debug(Arg<string>.Is.Anything));
}

The SetupIoCWithAutomock function is where the hard part takes place: it creates an AutoMockingContainer and uses the override method to instruct the Service Locator to use that container until the test ends.

private AutoMockingContainer.AutoMockingContainer SetupIoCWithAutomock()
{
   IWindsorContainer windsorContainer =
        new AutoMockingContainer.AutoMockingContainer();
   DisposeAtTheEndOfTest(IoC.OverrideEngine(new CastleIoC(windsorContainer)));
   return (AutoMockingContainer.AutoMockingContainer) windsorContainer;
}

This example shows one of the reasons why Service Locator is bad: it makes your tests complex because you have to take care of all these low-level details. With some infrastructure classes you can improve the test in this way.


Figure 2: The UseAutoMockingContainer attribute lets you declare that this test fixture will use automock in the Service Locator

In Figure 2 you can see that with a simple attribute I am able to declare that, for this test fixture, the global Service Locator class should be overridden with an AutoMockingContainer; then, with a simple extension method (called AutoMockingContainer()), I am able to get the container for that specific test and ask for the mocks autogenerated by the container to set up expectations on them.
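Since Figure 2 is only an image, here is a sketch of what such a fixture might look like, reusing the attribute and the mock-retrieval calls shown earlier in this post; the exact shape of the AutoMockingContainer() extension method is an assumption:

[TestFixture]
[UseAutoMockingContainer(new[] { typeof(ObjAnInterface) })]
public class ObjAnInterfaceFixture : BaseTestFixtureWithHelper
{
    [Test]
    public void DoTest_logs_a_debug_message()
    {
        // The attribute plugged the automocking container into the Service Locator.
        var sut = AutoMockingContainer().Resolve<ObjAnInterface>();
        sut.DoTest();

        // Retrieve the ILogger mock created by the container and verify the call.
        var logger = AutoMockingContainer().GetFirstCreatedMock<ILogger>();
        logger.AssertWasCalled(l => l.Debug(Arg<string>.Is.Anything));
    }
}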

This makes the code more readable.

Alk.

AOP With castle–Part 3-The first interceptor

Previous Parts of the series

Part 1 – The basic of interception
Part 2 – Selecting Methods to intercept

Usually the very first interceptor you build with an AOP framework is the “logging interceptor”, because it is simple and useful, especially when you expose some services with WCF. Consider this scenario: you expose some services with WCF; sometimes people tell you that your services have bugs, or that they encountered an exception, or that they got a wrong result, and so on. In this situation you receive information like:

“I got an exception from your service”, and no more details, so you need to spend time trying to understand what happened.

Since this is probably the most detailed information you can get from the customer, having a good log system is vital. You need to identify what happened, and you need to build a system that logs:

  • every call to service methods
  • all parameter values of the call
  • the return value and any exception that is raised

The goal is having a full and configurable log of what happened in the service, so you can retrace exceptions and problems experienced by users. Resist the temptation to modify every service class in the system by adding log calls, since you can use AOP to add logging capability to every intercepted service. Such an interceptor is really simple to build, and with a few lines of code you can log every call made to a method of an intercepted class.

As a prerequisite, you need to instantiate WCF service classes through the Castle WCF integration facility and the logging integration facility; thanks to these two facilities you can resolve the service classes with Castle (thus adding interceptors) and use the log4net integration to avoid hardcoding logging to a specific target, like a file.
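Wiring the logging facility into Windsor might look like this; the exact fluent syntax depends on the Windsor version you use, so treat it as a sketch:

// using Castle.Facilities.Logging;
container.AddFacility<LoggingFacility>(
    f => f.LogUsing(LoggerImplementation.Log4net)
          .WithConfig("log4net.config"));

The good point of the log4net Castle integration is the ability to declare a dependency on a generic ILogger interface. This feature can be used in the logging interceptor.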

public class LogAspect : IInterceptor
{
    public LogAspect(ILogger logger)
    {
        Logger = logger;
    }

    public ILogger Logger { get; set; }

The Castle log4net facility resolves the ILogger dependency with a log4net logger whose name is equal to the name of the class that declares the dependency. This is the most important point: in the code above the logger will have the name MusicStore.Aspects.Log.LogAspect, and this lets me change the log settings for each single class in the system. The interceptor also needs a little helper function to create a string that dumps every detail of the call.

public static String CreateInvocationLogString(IInvocation invocation)
{
    StringBuilder sb = new StringBuilder(100);
    sb.AppendFormat("Called: {0}.{1}(", invocation.TargetType.Name, invocation.Method.Name);
    foreach (object argument in invocation.Arguments)
    {
        String argumentDescription = argument == null ? "null" : DumpObject(argument);
        sb.Append(argumentDescription).Append(",");
    }
    if (invocation.Arguments.Count() > 0) sb.Length--;
    sb.Append(")");
    return sb.ToString();
}

Since the parameters of the service can be complex objects, I dump their content with a helper function that includes every detail of the object.

private static string DumpObject(object argument)
{
    Type objtype = argument.GetType();
    if (objtype == typeof(String) || objtype.IsPrimitive || !objtype.IsClass)
        return argument.ToString(); // log the value itself for simple types
    return DataContractSerialize(argument, objtype);
}

I want to keep the code simple: if the object's type is not a primitive, I use DataContractSerialize to dump its content in XML format.
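The DataContractSerialize helper itself is not shown in the post; a minimal sketch using the standard DataContractSerializer could look like this (only an assumption about the real helper):

// using System.Runtime.Serialization;
// using System.Text;
// using System.Xml;
private static string DataContractSerialize(object argument, Type objtype)
{
    // Serialize the argument to indented XML so the log contains
    // every public data member of the object.
    var serializer = new DataContractSerializer(objtype);
    var sb = new StringBuilder();
    using (var writer = XmlWriter.Create(sb, new XmlWriterSettings { Indent = true }))
    {
        serializer.WriteObject(writer, argument);
    }
    return sb.ToString();
}

Once everything is in place, I need to insert the call to the logger at the appropriate point.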

public void Intercept(IInvocation invocation)
{
    if (Logger.IsDebugEnabled) Logger.Debug(CreateInvocationLogString(invocation));
    try
    {
        invocation.Proceed();
    }
    catch (Exception ex)
    {
        if (Logger.IsErrorEnabled) Logger.Error(CreateInvocationLogString(invocation), ex);
        throw;
    }
}

Before each call to the logger object I first check whether the appropriate logging level is enabled. This technique avoids a loss of performance when logging is not enabled: if the log level is set to warn, Logger.Debug will not log anything, yet CreateInvocationLogString would still build the log string for nothing, wasting processor time with no benefit. Calling Logger.IsDebugEnabled first avoids the logging call entirely.

Now suppose the caller passes an invalid object to the Save() method of MusicStoreService: the user sees a message telling that a service error occurred, but now I can check the log to understand exactly what happened. Here is the log of a call that raises an exception.

2010-07-24 10:12:28,320 ERROR MusicStore.Aspects.Log.LogAspect - Called: MusicStoreService.Save(<Album xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/MusicStore.Entities">
  <Id>0</Id>
  <Author
    i:nil="true" />
  <Genre
    i:nil="true" />
  <Image
    i:nil="true" />
  <Label
    i:nil="true" />
  <Note
    i:nil="true" />
  <PublicationDate>0001-01-01T00:00:00</PublicationDate>
  <Title
    i:nil="true" />
  <Tracks
    i:nil="true" />
</Album>)
NHibernate.PropertyValueException: not-null property references a null or transient valueMusicStore.Entities.Album.Title
   at NHibernate.Engine.Nullability.CheckNullability(Object[] values, IEntityPersister persister, Boolean isUpdate)
   at NHibernate.Event.Default.AbstractSaveEventListener.PerformSaveOrReplicate(Object entity, EntityKey key, IEntityPersister persister, Boolean useIdentityColumn, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)
   at NHibernate.Event.Default.AbstractSaveEventListener.PerformSave(Object entity, Object id, IEntityPersister persister, Boolean useIdentityColumn, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)
   at NHibernate.Event.Default.AbstractSaveEventListener.SaveWithGeneratedId(Object entity, String entityName, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)

From this log I understand that an invalid object was passed to the service: the property Album.Title is required in the database, but the caller passed a null value. Since log4net is really flexible, I can send this information to a file, to a database, over the network, or by mail. For example, you can send a mail each time an exception occurs, so you are immediately notified when something in the service is not going well.

This logger can be improved a little, because the name of the logger is always MusicStore.Aspects.Log.LogAspect for every wrapped service. This is not really a problem, but I prefer to be able to configure logging differently for each service; in a real product with a lot of services, this is a key requirement. The interceptor can be changed in this way:

public class LogAspect : IInterceptor
{
    public LogAspect(ILoggerFactory loggerFactory)
    {
        LoggerFactory = loggerFactory;
        Loggers = new Dictionary<Type, ILogger>();
    }

    public ILoggerFactory LoggerFactory { get; set; }

    public Dictionary<Type, ILogger> Loggers { get; set; }

Now the interceptor declares a dependency on an ILoggerFactory rather than on a concrete ILogger, and caches ILogger objects by type. The result is a concrete ILogger object for each wrapped type.

public void Intercept(IInvocation invocation)
{
    if (!Loggers.ContainsKey(invocation.TargetType))
    {
        Loggers.Add(invocation.TargetType, LoggerFactory.Create(invocation.TargetType));
    }
    ILogger logger = Loggers[invocation.TargetType];
    if (logger.IsDebugEnabled) logger.Debug(CreateInvocationLogString(invocation));

Instead of always using the same logger, we first check whether we have already created a logger for the given type; if not, we use the ILoggerFactory to create it and cache it in the inner dictionary. If we send an invalid object to the service again, the head of the log is:

2010-07-24 10:27:30,783 DEBUG MusicStore.WebService.MusicStoreService - Called: MusicStoreService.Save(..

Now the name of the logger is equal to the name of the concrete service class, and you have created a simple logging system that lets you:

  1. add logging transparently to each service class without writing a single line of code in the services themselves
  2. change the logging level for each concrete service class

Happy intercepting with Castle :)

Alk.