Misusing an ORM

Some time ago I blogged that I was starting to consider ORM an antipattern, and recently Mr Fowler posted similar thoughts in his bliki. I also have the pleasure of being one of the organizers of the first official RavenDB course in Italy, with my dear friend Mauro as teacher.

Since I’m strongly convinced that in a fully OOP approach to a problem objects should have neither setters nor getters, most of the work and complexity of an ORM is simply not needed, because you usually retrieve objects from storage with a single function, GetById, and nothing else. In my long experience with NHibernate I have verified that most of the problems arise when you need to show data in the UI in a specific format: you start writing complex queries in HQL, ICriteria or LINQ, then you spend time with NHProfiler to understand whether those queries are good enough to run on the production system, and when the objects change a little you need to rewrite a lot of code to suit the new Object Model. This last point is the real pain in DDD, where the Object Model usually gets reshaped many times before reaching a good form; after all, the main value of the DDD approach is being able to create a dialog with a DOMAIN EXPERT, and it is impossible to find a good Object Model at the first attempt. If refactoring the model becomes painful you cannot modify it with ease, and you are drifting away from the DDD approach.

This is where CQRS can help you: for the objects belonging to the domain you only need Save, LoadById, Update and Delete, because every Read Model should be defined somewhere else. In such a scenario an ORM is really useful, because if you need to store objects inside a relational database you can leave to the ORM all the work of satisfying the CRUD part, where the R is the GetById method. To start easily with this approach you can create SQL views or stored procedures for all the Read Models you need; this implies that whenever the structure of the Domain Model changes you only need to update the affected Read Models, a few views and stored procedures, without refactoring any code.
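
To make the idea concrete, the interface below is a minimal sketch of the whole persistence surface the write side needs in this approach; the name and signatures are illustrative, not taken from any specific framework, and read models live entirely outside of it.

using System;

// Illustrative sketch: the only persistence operations the write side of a
// CQRS-style domain really needs. Everything read-oriented lives elsewhere
// (SQL views, stored procedures, denormalizers).
public interface IAggregateRepository<T> where T : class
{
    T GetById(Guid id);       // the single "R" of CRUD the domain uses
    void Save(T aggregate);   // insert a new aggregate
    void Update(T aggregate); // persist changes to an existing aggregate
    void Delete(T aggregate);
}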

In this situation the ORM can really help you: if you change the Domain Model you only need to change the mapping, or let some mapping-by-convention tool do it for you (ConfORM for NHibernate is an example), regenerate the database and update only the affected Read Models; in my opinion this is the scenario where an ORM really pays off. If instead your domain is anemic and objects expose properties, even only with getters, then whenever you change a domain class you have to answer questions like: “If I change this property, which other domain objects will be affected? How many service classes will be affected? How many queries issued from the views will be affected?”. Finally, if a Read Model cannot be built with a SQL view or a stored procedure, you can write a denormalizer that listens for DOMAIN EVENTS and populates the Read Model accordingly.
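
As a rough sketch of the denormalizer idea, the handler simply reacts to a domain event and writes a flat row into the read model. All the names below are invented purely for illustration, they are not part of any sample discussed here.

using System;
using System.Collections.Generic;

// Hypothetical event and read-model row, invented only to illustrate the shape of a denormalizer.
public class OrderPlaced { public Guid OrderId; public string CustomerName; public decimal Total; }
public class OrderSummaryRow { public Guid OrderId; public string CustomerName; public decimal Total; }

public class OrderSummaryDenormalizer
{
    private readonly IList<OrderSummaryRow> readModel; // stands in for a table or document collection

    public OrderSummaryDenormalizer(IList<OrderSummaryRow> readModel)
    {
        this.readModel = readModel;
    }

    // Invoked whenever the corresponding domain event is published.
    public void Handle(OrderPlaced e)
    {
        readModel.Add(new OrderSummaryRow { OrderId = e.OrderId, CustomerName = e.CustomerName, Total = e.Total });
    }
}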

In such a situation a NoSQL database can dramatically simplify your life, because you do not need an ORM anymore: you can save object graphs into the storage directly, and you can create Read Models with Map/Reduce or with denormalizers.

But sadly enough, ORMs are primarily used to avoid writing SQL and to persist completely anemic domains where all the logic resides in services. In such a scenario it is easy to abuse an ORM, and in the long term the ORM can become much more of a pain than a real help.

Gian Maria.

Getting the list of Type associated to a given export in MEF

One of the problems I had to solve to make WCF and MEF live together is knowing, at runtime, all the types discovered by MEF for a given export. This information is really important because I need the list of types that derive from Request and Response to inform WCF of all the KnownTypes available to the service. First of all, let’s see how I initialized the MEF engine.

private static CompositionContainer theContainer; 
private static DirectoryCatalog catalog;

static MefHelper() 
{ 
    catalog = new DirectoryCatalog(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)); 
    theContainer = new CompositionContainer(catalog); 
}

The catalog is instructed to load everything located in the same directory as the executing assembly, a configuration that is suitable for my simple WCF Request/Response service; then I need to change the DynamicKnownType class to get the list of exported types loaded by MEF.

class DynamicKnownType 
{ 
    static List<Type> knownTypes;

    static DynamicKnownType() 
    { 
        knownTypes = new List<Type>(); 
        knownTypes.AddRange(MefHelper.GetExportedTypes<Request>()); 
        knownTypes.AddRange(MefHelper.GetExportedTypes<Response>()); 
    }
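
    // Hypothetical continuation, not shown in the original excerpt: expose the
    // collected list through a static provider method so WCF can consume it.
    // ServiceKnownTypeAttribute requires exactly this shape: a static method that
    // takes an ICustomAttributeProvider (System.Reflection) and returns IEnumerable<Type>.
    public static IEnumerable<Type> GetKnownTypes(ICustomAttributeProvider provider)
    {
        return knownTypes;
    }
}

// The provider method is then referenced from the service contract, e.g.:
// [ServiceContract]
// [ServiceKnownType("GetKnownTypes", typeof(DynamicKnownType))]
// public interface ICoreService { ... }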

The GetExportedTypes<T>() method is a little tricky, because MEF does not offer such functionality out of the box, so I need to search through the information exposed by the catalog to identify all loaded types related to a specific export. The code is quite simple and consists of only a few lines.

public static IEnumerable<Type> GetExportedTypes<T>() 
{ 
    return catalog.Parts 
        .Select(part => ComposablePartExportType<T>(part)) 
        .Where(t => t != null); 
}

private static Type ComposablePartExportType<T>(ComposablePartDefinition part) 
{

    if (part.ExportDefinitions.Any( 
        def => def.Metadata.ContainsKey("ExportTypeIdentity") && 
            def.Metadata["ExportTypeIdentity"].Equals(typeof(T).FullName))) 
    { 
        return ReflectionModelServices.GetPartType(part).Value; 
    } 
    return null; 
}

You can find all the types related to an export because the MEF catalog exposes a property named Parts, an IEnumerable of ComposablePartDefinition, where each instance contains full details about a type discovered by MEF. For each ComposablePartDefinition I can cycle through the ExportDefinitions list to find out whether one of them is related to the type I’m looking for. This specific information is not exposed directly, but it is contained in the metadata associated with the ExportDefinition, under a key called “ExportTypeIdentity” that holds the FullName of the exported type.

If one of the ExportDefinitions exports the type I’m searching for, I can finally use the ReflectionModelServices.GetPartType() static method to obtain the type of the dynamically imported class. This simple method makes it possible to discover all the concrete classes loaded by MEF that inherit from Request or Response, and to build the list of KnownTypes for WCF.


Figure 1: From the Wcf Test Client you can choose between all the requests that were dynamically loaded by MEF

Example can be downloaded here.

Gian Maria.

How to instantiate WCF host class with MEF

In the last post of the series I described the structure behind the Request/Response service based on MEF; now it is time to explain how to make MEF and WCF happily live together. In the first version I hosted the service with these simple lines of code:

using (ServiceHost host = new ServiceHost(typeof(CoreService)))
{
    host.Open();
    // ... keep the process alive while the host serves requests
}

Basically, all I needed to do was create a ServiceHost, specifying in the constructor the type of the class that implements the service, and let WCF take care of every detail about creating the concrete instances that answer the requests.

In this new version of the service the CoreService class can no longer be created with the default constructor, because the instance must be built by MEF; so I need to instruct WCF to create the CoreService class through MEF.

What I need is a concrete implementation of IInstanceProvider, the interface used by WCF to manage the creation of the concrete classes that implement my service.

class CoreServiceInstanceProvider : IInstanceProvider
{
    public object GetInstance(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.Channels.Message message)
    {
        return MefHelper.Create<ICoreService>(); 
    }

    public object GetInstance(System.ServiceModel.InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(System.ServiceModel.InstanceContext instanceContext, object instance)
    {
            
    }
}

My implementation is super simple: I only need to use the MefHelper.Create() method to let MEF create the class and compose everything. However, I need another couple of classes to instruct WCF to use my CoreServiceInstanceProvider when it instantiates classes for the service. The first is a service behavior, a class implementing IServiceBehavior that makes WCF use my CoreServiceInstanceProvider; the second is a class that inherits from ServiceHost to automatically add this behavior to the WCF host.

public class CoreServiceBehavior : IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcherBase cdb in serviceHostBase.ChannelDispatchers)
        {
            ChannelDispatcher cd = cdb as ChannelDispatcher;
            if (cd != null)
            {
                foreach (EndpointDispatcher ed in cd.Endpoints)
                {
                    ed.DispatchRuntime.InstanceProvider = new CoreServiceInstanceProvider();
                }
            }
        }
    }

    public void AddBindingParameters(ServiceDescription 
        serviceDescription, 
        ServiceHostBase serviceHostBase, 
        Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters)
    {
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
    }
}

public class CoreServiceHost : ServiceHost {

    public CoreServiceHost() : base(typeof(CoreService))
    {
        
    }

    protected override void OnOpening()
    {
        Description.Behaviors.Add(new CoreServiceBehavior());
        base.OnOpening();
    }
}

With these helper classes, hosting the service in a simple console application is a breeze.

using (ServiceHost host = new CoreServiceHost())
{
    host.Open();
    // ... keep the process alive while the host serves requests
}

Et voilà: two lines of code and I’m able to start the service, with every instance of the service created by MEF.

Gian Maria.

Traffic light vNext

It has been a long time since I posted about the simple Traffic Light experiment. I ended up with a super simple Domain with no Getters and no Setters, but there is still something I really do not like about that sample, and it is represented by this test.

[Fact]
public void When_both_semaphore_are_red_one_become_green()
{

    //move both traffic lights to fixed yellow
    using (DateTimeService.OverrideTimeGenerator(DateTime.Now.AddSeconds(1)))
    {
        sut.Tick();
    }
    _domainEvents.Clear();
    //now move again to both red
    using (DateTimeService.OverrideTimeGenerator(DateTime.Now.AddSeconds(10)))
    {
        sut.Tick();
    }
    _domainEvents.Should().Have.Count.EqualTo(2);
    _domainEvents.All(e => e.NewStatus == LightColor.Red).Should().Be.True();
    _domainEvents.Clear();
    using (DateTimeService.OverrideTimeGenerator(DateTime.Now.AddSeconds(1)))
    {
        sut.Tick();
    }
    //only one semaphore should go to the green state
    _domainEvents.Should().Have.Count.EqualTo(1);
    _domainEvents[0].NewStatus.Should().Be.EqualTo(LightColor.Green);
}

This test is quite ugly and it requires some initialization code, contained in the constructor of the test class.

public TwoTrafficLightFixture()
{
    DomainEvents.ClearAllRegistration();
    CommunicationBus.ClearAllRegistration();

    DomainEvents.Register<ChangedLightStatus>(this, e => _domainEvents.Add(e));
    sut = CrossRoadFactory.For(2.Roads()).Create();
    sut.Start();
}

The bad part about this code is what I actually want to test: when both traffic lights are in the red state, only one of them can become green after a given amount of time has passed. The awful part is how I set up the fixture: to bring both Traffic Lights to the red state, I need to create the CrossRoad through the CrossRoadFactory in the constructor of the test class (because all tests share this common initialization), then I need to call Tick() several times, simulating the passing of time, to move the system from its initial status to the status that represents the fixture (both lights red).

This test is simply wrong: if it fails you cannot tell whether the failure is caused by the initialization code or by the part of the domain logic you really want to test, because I’m exercising the SUT until it reaches the status I want to verify, and that setup itself can make the fixture fail. It is done this way because the Traffic Light has no public properties, so it is not possible to manipulate its status directly in the test.

To simplify the test I need a way to change the status of a Traffic Light, so that I can simplify fixture creation, and the obvious solution is to use Event Sourcing. I do not want to evolve the Traffic Light into a fully Event Sourcing enabled domain, but I wish to verify whether the ability to reconstruct the state of a domain object from the domain events it raised in the past can solve my test smell. I decided to add a constructor to the TrafficLight domain class that permits creating an instance from a sequence of Domain Events.

public TrafficLight(params BaseEvent[] eventStream)
{
    ActualState = YellowBlinkingState.Instance();
    CommunicationBus.Register<MayITurnGreen>(this, MayITurnGreen);
    Load(eventStream);
}

public void Load(params BaseEvent[] eventStream)
{
    foreach (var eventInstance in eventStream)
    {
        // crude replay: in this sample every stored event is a ChangedLightStatus
        HandleChangedLightStatus(eventInstance as ChangedLightStatus);
    }
}
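
As a side note, a fuller event-sourced entity usually does not cast every replayed event to a single concrete type; a common alternative, sketched below only as an illustration of mine and not as code from the sample, routes each event to a matching private Apply overload via dynamic dispatch.

// Illustration only: route each replayed event to the proper Apply overload at
// runtime, so supporting a new event type just means adding another overload.
public void Load(params BaseEvent[] eventStream)
{
    foreach (var eventInstance in eventStream)
    {
        Apply((dynamic)eventInstance);
    }
}

private void Apply(ChangedLightStatus e)
{
    HandleChangedLightStatus(e);
}

// Events this entity does not need to replay simply fall through here.
private void Apply(BaseEvent e)
{
}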

The version in the sample is really primitive and far from being a real Event Sourcing based entity, but the key concept is that I want to be able to reconstruct the private state of an entity simply by replaying a series of domain events it raised in the past. Now the previous test can be rewritten in a much simpler form:

[Fact]
public void When_both_semaphore_are_red_one_become_green()
{

    TrafficLight first = new TrafficLight(new ChangedLightStatus(LightColor.Red, null));
    TrafficLight second = new TrafficLight(new ChangedLightStatus(LightColor.Red, null));
    CrossRoad theSut = new CrossRoad(
        new TrafficLightCreated(first),
        new TrafficLightCreated(second));

    using (DateTimeService.OverrideTimeGenerator(DateTime.Now.AddSeconds(1)))
    {
        theSut.Tick();
    }

    //only one semaphore should go to the green state
    _domainEvents.Should().Have.Count.EqualTo(1);
    _domainEvents[0].NewStatus.Should().Be.EqualTo(LightColor.Green);
}

Now I can create two traffic lights by passing a ChangedLightStatus event that brings each Traffic Light to the status requested by the fixture, then I can create the CrossRoad class passing two TrafficLightCreated domain events, and this is all the code needed to set up the fixture of my test. Then I simulate that one second has passed, call the Tick() function on the CrossRoad, and verify that only one Domain Event is raised, because only one Traffic Light should have changed its light from Red to Green.

Even with this super simple example you can see that Event Sourcing simplifies your tests, because it gives you the ability to recreate a specific state of the domain under test from a stream of Domain Events.

Gian Maria.

Evolving Request Response service to separate contract and business logic

Example can be downloaded here.

I previously described a scenario where the customer needs a really basic Request/Response service in WCF; the goal is to take advantage of a request/response structure with an approach like “the simplest thing that could possibly work”. This technique is usually needed to introduce new architectural concepts in a team without requiring people to learn a huge number of concepts in a single shot, a situation that can end with a team actively fighting the new architecture because it is too complex.

Once the team is used to the basic version of the Request/Response service and understands the advantages of this approach, it is time to evolve it towards a more mature implementation; since the grounding concepts are now clear, adding a little bit of extra complexity is usually a simple step. This is the concept of Evolving Architecture or Emergent Design: the goal is to introduce functionality and add extra complexity only to answer a real requirement, not for the sake of having a complete/complex architecture. After a little bit of usage of the basic version of the Request/Response service, some new requirements led to an improvement of the basic architecture. The very first problem of the current basic architecture is that contract and implementation are contained in the same class.


Figure 1: Class diagram of a sample Request class

In Figure 1 you can see a sample request: it is called AddTwoNumber and it contains both the contract definition and the business logic that executes the request. This coupling is too high; the new requirement asks to separate the contract from the business logic, and also to evolve the architecture so that contracts and business logic can be loaded from separate assemblies using the concept of a plugin.

This new requirement can be solved easily with MEF, a library that takes care of everything about discovering and loading all the request/response/handler objects that compose our service. I started by removing the Execute method from the basic Request class and moving it into another class called Handler, which takes care of executing a request.


Figure 2: New version of the base Request and Response classes

As you can see from Figure 2, the Request and Response classes are now just base contract classes, with no methods related to execution; they contain only properties. To execute a request and return a Response we need another class, called Handler, that is capable of handling a request and returning a response. The key concept is that for each request there is a separate handler capable of executing that request.

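The post does not show the base contract classes themselves; given the description above (property-only classes, discoverable by MEF and serialized by WCF), they could look roughly like the following sketch, where the DataContract/DataMember attributes are my assumption and the IsSuccess/ErrorMessage members simply match their usage in the Execute method shown later.

using System.ComponentModel.Composition;
using System.Runtime.Serialization;

// Sketch of the base contracts: property-only classes marked with InheritedExport
// so that every derived Request/Response is automatically exported to MEF.
// Attributes and members are assumptions, not code from the original post.
[InheritedExport]
[DataContract]
public class Request
{
}

[InheritedExport]
[DataContract]
public class Response
{
    [DataMember]
    public bool IsSuccess { get; set; }

    [DataMember]
    public string ErrorMessage { get; set; }
}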

I decided to introduce a basic abstract class with no generics; this base class is able to handle a Request object. Then I derived another abstract class, called Handler<T>, capable of handling a specific type of request. Here is the full code.

[InheritedExport]
public abstract class Handler
{
    public Response Handle(Request request)
    {
        return OnHandle(request);
    }

    protected abstract Response OnHandle(Request request);

    public abstract Type RequestHandledType();
}

public abstract class Handler<T> : Handler where T : Request
{

    protected override Response OnHandle(Request request)
    {
        return HandleRequest((T) request);
    }

    protected abstract Response HandleRequest(T request);

    public override Type RequestHandledType()
    {
        return typeof(T);
    }
}

The key point in this structure is that the base Handler class carries the MEF-specific InheritedExport attribute, which tells the MEF engine to automatically export all types that inherit from this base type. The base Handler class also has a RequestHandledType() method to declare the concrete Request class executed by the handler, which lets me override it in Handler<T> by simply returning typeof(T). The same InheritedExport attribute is then added to the Request and Response classes to make them loadable by MEF. The cool part is that everything related to discovering Requests, Responses and Handlers is done by MEF, and all the MEF plumbing is hidden behind a simple MefHelper class.

public static class MefHelper
{
    private static CompositionContainer theContainer;
    private static DirectoryCatalog catalog;

    static MefHelper()
    {
        catalog = new DirectoryCatalog(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location));
        theContainer = new CompositionContainer(catalog);
    }

    public static void Compose(Object obj)
    {
        var cb = new CompositionBatch();
        cb.AddPart(obj);
        theContainer.Compose(cb);
    }

    public static T Create<T>()
    {
        return theContainer.GetExportedValue<T>();
    }

    public static CoreService CreateService()
    {
        return Create<CoreService>();
    }
}

The code is minimal: I simply create a MEF catalog that scans all assemblies located in the same directory as the executing assembly, plus a couple of helper methods to simplify composition and the retrieval of exported values. The key method here is CreateService(), which internally uses MEF to create a concrete implementation of the CoreService class, where CoreService is the class exposed as a WCF service.

[Export(typeof(ICoreService))]
public class CoreService : ICoreService
{
    #region ICoreService Members

    private Dictionary<Type, IList<Handler>> HandlerForTypes = new Dictionary<Type, IList<Handler>>();

    private IList<Handler> GetHandlersForType(Type type) {
        if (!HandlerForTypes.ContainsKey(type)) {
            HandlerForTypes.Add(type, new List<Handler>());
        }
        return HandlerForTypes[type];
    }

    [ImportingConstructor]
    public CoreService([ImportMany(typeof(Handler))] IEnumerable<Handler> handlers)
    {
        foreach (var handler in handlers)
        {
            GetHandlersForType(handler.RequestHandledType()).Add(handler);
        }
    }

The CoreService class was modified to use this new architecture: first of all I added the Export attribute to tell MEF that this class is the export for the ICoreService service contract; then I added a simple Dictionary to associate each request type with the corresponding handlers; finally I added a constructor marked with ImportingConstructor, with the ImportMany attribute on its single IEnumerable<Handler> parameter. This constructor tells MEF that the CoreService class needs the list of all Handlers discovered by MEF, and it is the magic attribute that makes MEF scan all the dlls in the current directory to find every class that inherits from the Handler abstract base class. In the constructor a simple foreach associates each handler with the concrete Request it is capable of handling, using the RequestHandledType() method discussed previously.

public Infrastructure.Response Execute(Request request)
{
    try
    {
        var handlerList = GetHandlersForType(request.GetType());
        if (handlerList.Count == 0)
            return new Response() { IsSuccess = false };
        if (handlerList.Count == 1) 
            return handlerList[0].Handle(request);

        throw new NotSupportedException();
    }
    catch (Exception)
    {
        // swallow the exception and return a generic failure response to the caller
        return new Response() { IsSuccess = false, ErrorMessage = "Exception during execution" };
    }
}

The Execute method is really simple: for each request I check whether an appropriate Handler is available; if there is no handler I return an error, if there is a single Handler I use it to execute the Request, and if there is more than one Handler I throw an exception, because that is an unsupported scenario in this version. Thanks to MEF and very few lines of code I was able to evolve the basic structure into a more flexible architecture where the CoreService dynamically loads the Requests/Responses/Handlers of the concrete implementation.

Now you can take the old AddTwoNumber request from the previous example and evolve it to fit this new architecture. The only operation needed is removing the Execute() method from the request and moving it into an appropriate Handler, as shown in this simple snippet.

public class AddTwoNumberHandler : Handler<AddTwoNumberRequest>
{
    protected override Response HandleRequest(AddTwoNumberRequest request)
    {
        return new MathOperationResult(request.FirstAddend + request.SecondAddend);
    }
}

The code needed to implement a business operation is really minimal: just inherit from Handler<AddTwoNumberRequest>, override HandleRequest and let the infrastructure take care of everything else. A working example can be downloaded here, and in future posts I will explain the other parts of this infrastructure related to MEF and the runtime discovery of plugins.
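
For completeness, the request and response contracts this handler works with are not shown in the post; a plausible sketch follows, where FirstAddend, SecondAddend and the MathOperationResult constructor come from the handler code above, while everything else (the use of int, the attributes) is an assumption.

using System.Runtime.Serialization;

// Assumed shape of the contracts used by AddTwoNumberHandler: plain property-only
// classes deriving from the base Request/Response contracts shown earlier.
[DataContract]
public class AddTwoNumberRequest : Request
{
    [DataMember]
    public int FirstAddend { get; set; }

    [DataMember]
    public int SecondAddend { get; set; }
}

[DataContract]
public class MathOperationResult : Response
{
    public MathOperationResult(int result)
    {
        Result = result;
        IsSuccess = true; // assumption: a successful operation marks the response as succeeded
    }

    [DataMember]
    public int Result { get; set; }
}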

Gian Maria