How to add reference to Microsoft.VisualStudio.Designer.Interfaces

I need to create a project that references Microsoft.VisualStudio.Designer.Interfaces. Since it is not listed in the standard Add Reference pane, I resorted to editing the .csproj manually to add the reference.

<Reference Include="Microsoft.VisualStudio.Designer.Interfaces, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
   <SpecificVersion>False</SpecificVersion>
   <HintPath>D:\Program Files\Microsoft Visual Studio 8\Common7\IDE\Microsoft.VisualStudio.Designer.Interfaces.DLL</HintPath>
</Reference>

Could someone suggest a better, less hardcore way?

Alk.

Write a custom lifecycle for Castle Windsor

In another post I spoke about Castle Windsor and the lifecycle of objects; now I want to show how to write a custom lifecycle. First of all, the class must inherit from the AbstractLifestyleManager class; then we must choose where to store the instances of the objects created by the container. Since I want to build a lifecycle that can be used both from WinForms code and from web code, I chose the System.Runtime.Remoting.Messaging.CallContext class: it flows with the logical call (it backs HttpContext internally), so it can store objects for the scope of an HTTP call. But I need another behavior: I need to programmatically manage the lifecycle from calling code, which is especially useful for testing purposes. The result is a class called ManageableLifeCycle, a custom lifecycle that can be managed from calling code.

The final result should be a lifecycle that is transient by default, but that can become singleton when the caller needs it, and only for a limited amount of time. The first step is to make our custom lifecycle override the Resolve() method, which Castle calls when it needs to resolve an object. This is the code.

public override object Resolve(global::Castle.MicroKernel.CreationContext context) {
   if (contextBag != null) {
      //We are in a context
      if (_instance == null) {
         _instance = base.Resolve(context);
         contextBag.Add(_instance);
      }
      return _instance;
   } else {
      //We are not in a context
      return base.Resolve(context);
   }
}

First of all, you should know that Castle creates a lifecycle manager for each defined component. Our lifecycle defines an ArrayList called contextBag that contains all the instances created in a context; the context is defined by the calling code, so if the calling code does not create a context, the lifecycle of the object is the same as transient. As you can see, the code first checks whether contextBag is null: if it is null, it simply returns base.Resolve(), actually creating a new instance for each call. If contextBag is not null, we are inside a context, so we create the object and store it in an inner field called _instance. This means that our lifecycle works like a singleton inside a context and as transient outside of it. Please note that the created instance is also stored in the ArrayList kept in the CallContext. Now the lifecycle must also override the Release() method, which is called when client code calls Release() on an instance created through the container.

public override void Release(object instance) {
   if (contextBag == null) {
      base.Release(instance);
   } else {
      if (!contextBag.Contains(instance))
         base.Release(instance);
   }
}

If we are not in a context (contextBag == null), the lifecycle calls the base version of Release; but if we are in a managed context, we release the object only if it is not contained in the contextBag. This is needed to mimic the behavior of the singleton lifestyle: calling Release inside a context does not release anything. OK, now the goal is to make this test pass.

public void TestInsideContext() {
   using (IoC.OverrideGlobalConfiguration(@"Castle\Windsor\ManageableLifeCycle\WindsorConfig.xml")) {
      ITest dt1, dt2;
      using (ManageableLifeCycle.BeginThreadContext()) {
         dt1 = IoC.GetObject<ITest>();
         dt2 = IoC.GetObject<ITest>();
         Assert.AreSame(dt1, dt2);
      }
      //Context is ended, now all requested objects are new.
      Assert.AreNotSame(dt1, IoC.GetObject<ITest>());
      //Check that objects are really disposed when we are outside of a context
      Assert.IsTrue(dt1.IsDisposed, "object does not get disposed correctly at the end of the context");
   }
}

When we call the static method BeginThreadContext() we are actually asking our lifecycle manager to go into singleton mode, beginning a context; when the using block ends, the behavior reverts to transient. This is not enough, because all objects created inside the context should be disposed if they implement the IDisposable interface. First of all, let's see the code to begin a context.

public static DisposableAction BeginThreadContext() {
   if (contextBag != null)
      throw new InvalidOperationException("Another thread context was already begun");
   CallContext.SetData(dataId, new ArrayList());
   return new DisposableAction(delegate() {
      EndThreadContext();
   });
}

We simply store a new ArrayList in the System.Runtime.Remoting.Messaging.CallContext class, using a GUID as the key identifier; then we create a DisposableAction that simply calls EndThreadContext.
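The contextBag member used throughout is never shown in the post; presumably it just reads the ArrayList back from the CallContext under the dataId key. A minimal sketch under that assumption (the member names mirror the post; everything else, including the enclosing class, is hypothetical):

```csharp
using System;
using System.Collections;
using System.Runtime.Remoting.Messaging;

// Hypothetical sketch: how contextBag and dataId might be defined.
public static class ContextBagSketch {
    // A GUID string makes the CallContext slot name unique to this type.
    private static readonly string dataId = Guid.NewGuid().ToString();

    // Null means no context has been begun on the current logical call.
    public static ArrayList contextBag {
        get { return CallContext.GetData(dataId) as ArrayList; }
    }

    public static void Begin() { CallContext.SetData(dataId, new ArrayList()); }
    public static void End()   { CallContext.FreeNamedDataSlot(dataId); }
}
```

Reading through CallContext on every access keeps the bag local to the logical call, which is what makes the same lifecycle usable from both web and test code.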

public static void EndThreadContext() {
   if (contextBag == null)
      throw new InvalidOperationException("No thread context is begun.");
   foreach (object obj in contextBag) {
      IDisposable d = obj as IDisposable;
      if (d != null)
         d.Dispose();
   }
   CallContext.FreeNamedDataSlot(dataId);
}
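The DisposableAction type returned by BeginThreadContext is not shown in the post; it is presumably a small helper that runs a delegate when disposed, so a using block can delimit the context. A minimal sketch (the delegate name is illustrative):

```csharp
using System;

// Hypothetical delegate name; the post may use its own Proc-style delegate.
public delegate void ActionProc();

// Sketch: runs the supplied delegate when disposed, letting
// "using (BeginThreadContext())" end the context automatically.
public class DisposableAction : IDisposable {
    private readonly ActionProc _action;

    public DisposableAction(ActionProc action) {
        if (action == null) throw new ArgumentNullException("action");
        _action = action;
    }

    public void Dispose() {
        _action();
    }
}
```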

EndThreadContext simply iterates through the list of created objects and, if an object implements IDisposable, calls its Dispose method. Now, if you create a simple HttpModule that calls BeginThreadContext at the beginning of a request and EndThreadContext at the end of the same request, you get singleton behavior only for the scope of the web request; and you can use BeginThreadContext in your tests to create a simple scope where the lifecycles of the managed objects really are singletons, but the objects get disposed and released when the scope ends.
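Such an HttpModule is not shown in the post; assuming the ManageableLifeCycle static methods described above, the wiring might look like this (registration of the module in web.config is also assumed):

```csharp
using System;
using System.Web;

// Hypothetical sketch of the module described in the text: one
// lifecycle context per web request.
public class LifeCycleHttpModule : IHttpModule {
    public void Init(HttpApplication application) {
        application.BeginRequest += delegate(object sender, EventArgs e) {
            // The returned DisposableAction is ignored; EndRequest
            // closes the context explicitly instead.
            ManageableLifeCycle.BeginThreadContext();
        };
        application.EndRequest += delegate(object sender, EventArgs e) {
            ManageableLifeCycle.EndThreadContext();
        };
    }

    public void Dispose() { }
}
```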

Relating to an older post of mine about the importance of calling Container.Release, please note that this custom lifecycle manager does not store any reference to the object when there is no context, so it does not call Dispose when the container is disposed, nor does it risk a leak if you forget to call Container.Release.

Alk.

P.S. If you are interested in the code, you can check it out with Subversion at http://nablasoft.googlecode.com/svn/trunk/. Please excuse the messiness of the code; this repository is only a place where I do various experiments.

Complex fixture teardown

Download the code of the post.

Have you ever dealt with tests having really complex fixtures? It sometimes happens in projects that were not designed for testability: quite often you need to refactor, you begin to prepare a series of basic tests, but the interactions between the components of the system are really complex, and when a test goes wrong the whole suite is compromised. Yesterday I dealt with a series of tests that needed a really complex fixture, composed of classes that opened sockets and did all sorts of complex things (remoting, sockets, shared variables, etc.). The first test suite was a pitiful result: when a test failed, the fixture was not cleaned up properly, all the remaining tests failed, and it was very difficult to understand where the error was. Let's try to simulate the same situation; here is a possible test.

SomeClass test1;
SomeDisposable test2;
SomeDisposable test3;

[SetUp]
public void SetUp() {
   test1 = new SomeClass();
   test1.Init();
   test1.AddSomething();
   test2 = new SomeDisposable(1);
   test2.DoSomething(100);
   test3 = new SomeDisposable(2);
   test3.DoSomething(50);
}

[TearDown]
public void TearDown() {
   test3.UndoSomething();
   test3.Dispose();
   test2.UndoSomething();
   test2.Dispose();
   test1.RemoveSomething();
}

[Test]
public void Test1() {
   Console.WriteLine("Exercise SUT");
}

Please keep the example project shipped with this post open to look at the various classes. The concept here is that the class SomeClass needs the method RemoveSomething() called to bring the system back to its previous state; SomeDisposable has DoSomething() and UndoSomething() methods plus the standard Dispose(), so we need to call a series of functions in reverse order to tear down the fixture. The above test does not work; there are too many problems with it. If DoSomething(100) throws an exception, the teardown method will still try to call test2.UndoSomething(), but this is an error: if DoSomething() threw, the corresponding UndoSomething() must not be called for my fixture. Moreover, if test2.UndoSomething() throws an exception, the teardown method exits and the test2 and test1 objects do not get disposed. The situation is quite complex; we need a class capable of handling this fixture. First of all, let's define some delegates.

public delegate void Proc();
public delegate void Proc<T1>(T1 param1);
 
public delegate TRet Func<TRet>();
public delegate TRet Func<TRet, T1>(T1 param1);
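Note that these delegates predate the Action/Func families later added to the BCL, and the type-parameter order differs: in this Func the return type comes first. A quick self-contained usage example with C# 2.0 anonymous-method syntax (the class and method names are illustrative):

```csharp
using System;

// The post's custom delegates, re-declared here so the example stands alone.
public delegate void Proc();
public delegate TRet Func<TRet, T1>(T1 param1);

public static class DelegateExample {
    public static int Demo() {
        Proc greet = delegate() { Console.WriteLine("hello"); };
        // Note: return type first, then the argument type.
        Func<int, string> length = delegate(string s) { return s.Length; };

        greet();
        return length("fixture"); // 7
    }
}
```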

With these delegates I can write a class that handles a fixture made of distinct steps. Each step has a setup and a teardown function, and for each setup function that completes without exception I want the corresponding teardown function to be called.

public class FixtureHandler : IDisposable {

   private readonly Dictionary<Proc<FixtureHandler>, Proc> mFixture;
   private readonly Stack<Proc> mDisposeProcedures;
   private readonly Stack<IDisposable> mDisposeList;

This class is disposable and stores a dictionary of delegates called mFixture: the key is the setup function and the value is the corresponding teardown function. Then I declare two stacks: one for the teardown functions that must be called when the fixture is disposed, and another that keeps track of all the disposable objects created by the various setup functions. The setup part of a step is a function that returns void and accepts a parameter of type FixtureHandler, so it can register with the handler any disposable objects it creates. To fully understand how this class works, here is the setup code.

public void AddFixtureStep(Proc<FixtureHandler> setup, Proc teardown) {
   mFixture.Add(setup, teardown);
}

public void AddDisposable(IDisposable objToDispose) {
   if (objToDispose != null)
      mDisposeList.Push(objToDispose);
}

public Boolean SetUp() {
   foreach (KeyValuePair<Proc<FixtureHandler>, Proc> element in mFixture) {
      try {
         element.Key(this);
         mDisposeProcedures.Push(element.Value);
      }
      catch (Exception ex) {
         Console.Error.WriteLine("Exception during Setup: {0}", ex.ToString());
         return false;
      }
   }
   return true;
}


First of all we have two methods: one that adds a step and one that registers a disposable object with the handler. The SetUp() method cycles through all defined steps; for each one it invokes the setup function (the key of the dictionary), passing itself as a parameter, then pushes the teardown function (the value of the dictionary element) onto the stack that contains the list of teardown procedures. With this scheme we can be sure that for each setup step that succeeds, the corresponding teardown step is recorded. If an exception is thrown during the setup phase, the corresponding teardown function is not added to the stack and the method returns false to signal that the setup phase failed. The Dispose method of the FixtureHandler class takes care of all the teardown tasks.


public void Dispose() {
   TearDown();
}

public void TearDown() {
   while (mDisposeProcedures.Count > 0) {
      try {
         mDisposeProcedures.Pop()();
      }
      catch (Exception) { }
   }
   while (mDisposeList.Count > 0) {
      try {
         mDisposeList.Pop().Dispose();
      }
      catch (Exception) { }
   }
}

As you can see, all teardown delegates are called in reverse order, since a stack is a LIFO structure; after all teardown functions have executed, the FixtureHandler cycles through the stack of disposable objects and disposes everything. Each operation is wrapped in a try..catch block, so if a teardown function throws an exception we can be sure that the other teardown functions are still called and the objects still get disposed. Now we can build a really robust test.

[TestFixture]
public class Example2 {

   SomeClass test1;
   SomeDisposable test2;
   SomeDisposable test3;

   [Test]
   public void Test2() {
      using (FixtureHandler fixture = CreateFixture()) {
         Assert.That(fixture.SetUp(), "FixtureSetupFailed");
         //Exercise SUT
      }
   }

   private FixtureHandler CreateFixture() {
      FixtureHandler fixture = new FixtureHandler();
      fixture.AddFixtureStep(
         delegate(FixtureHandler fixtureHandler) {
            test1 = new SomeClass();
            test1.Init();
            test1.AddSomething();
         },
         delegate() {
            test1.RemoveSomething();
         });
      fixture.AddFixtureStep(
         delegate(FixtureHandler fixtureHandler) {
            test2 = new SomeDisposable(1);
            fixtureHandler.AddDisposable(test2);
            test2.DoSomething(100);
         },
         delegate() {
            test2.UndoSomething();
         });
      fixture.AddFixtureStep(
      …
      …

First of all, with anonymous delegates we can easily build each step of the fixture; the test itself is very clean because the FixtureHandler object implements IDisposable, so it can be used in a using block. The SetUp() method of the FixtureHandler returns a Boolean, so we put an assertion on it; this way, if the setup part of the fixture goes wrong, the test fails in a predictable way. When we deal with disposable objects, as with the SomeDisposable class, the setup function should call the AddDisposable() method of the FixtureHandler as soon as the object is created; this way, if DoSomething() throws an error, the FixtureHandler can still dispose the object correctly.
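To see the guarantee in isolation, here is a condensed, self-contained sketch of the FixtureHandler idea, runnable without NUnit or the sample classes. One deliberate deviation: it stores the steps in a List of pairs rather than a Dictionary, because Dictionary enumeration order is not guaranteed by the runtime, while the original code relies on the de facto insertion order of an add-only Dictionary.

```csharp
using System;
using System.Collections.Generic;

public delegate void Proc();
public delegate void Proc<T1>(T1 param1);

// Condensed sketch: when a setup step throws, only the teardowns of the
// steps that already succeeded are executed, in reverse order.
public class FixtureHandlerSketch : IDisposable {
    private readonly List<KeyValuePair<Proc<FixtureHandlerSketch>, Proc>> mFixture =
        new List<KeyValuePair<Proc<FixtureHandlerSketch>, Proc>>();
    private readonly Stack<Proc> mDisposeProcedures = new Stack<Proc>();
    private readonly Stack<IDisposable> mDisposeList = new Stack<IDisposable>();

    public void AddFixtureStep(Proc<FixtureHandlerSketch> setup, Proc teardown) {
        mFixture.Add(new KeyValuePair<Proc<FixtureHandlerSketch>, Proc>(setup, teardown));
    }

    public void AddDisposable(IDisposable objToDispose) {
        if (objToDispose != null) mDisposeList.Push(objToDispose);
    }

    public bool SetUp() {
        foreach (KeyValuePair<Proc<FixtureHandlerSketch>, Proc> element in mFixture) {
            try {
                element.Key(this);
                mDisposeProcedures.Push(element.Value);  // record teardown only on success
            } catch (Exception) {
                return false;
            }
        }
        return true;
    }

    public void Dispose() { TearDown(); }

    public void TearDown() {
        // Pop in LIFO order; swallow exceptions so every step runs.
        while (mDisposeProcedures.Count > 0) {
            try { mDisposeProcedures.Pop()(); } catch (Exception) { }
        }
        while (mDisposeList.Count > 0) {
            try { mDisposeList.Pop().Dispose(); } catch (Exception) { }
        }
    }
}
```

For example, with a first step that succeeds and a second that throws, SetUp() returns false and TearDown() runs only the first step's teardown.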

Alk.

About ParameterMarkerFormat

Some time ago I wrote a post about a generic data access helper based on an article by Ayende. In that article I made a mistake in the use of ParameterMarkerFormat, and I think it is time to correct it. I showed a little routine to get the parameter name based on the type of the provider, but I used it in the wrong part of the code. This is the corrected function, AddParameterToCommand.

public static void AddParameterToCommand(
   DbCommand command,
   DbProviderFactory factory,
   System.Data.DbType type,
   String name,
   object value) {

   DbParameter param = factory.CreateParameter();
   param.DbType = type;
   param.ParameterName = name;
   param.Value = value;
   command.Parameters.Add(param);
}

As you can see, this is only a wrapper that creates and configures a parameter with a DbProviderFactory. In the old code the name of the parameter was created with GetParameterName; that was not correct, since the name of the parameter object is the same for all providers. What changes is the query itself: while for SQL Server I create a query like "SELECT COUNT(*) FROM Customers WHERE City = @city", for Oracle the same query would be "SELECT COUNT(*) FROM Customers WHERE City = :city". The name of the parameter object is city in both cases, but the name of the parameter in the query text is different. ParameterMarkerFormat should serve this purpose: it should be @{0} for SQL Server and :{0} for Oracle, so you can build your query dynamically and use the correct parameter name for each provider. The problem is that SQL Server returns {0} for ParameterMarkerFormat, and obviously this does not work. I want this test to pass:

[Test]
public void TestDbHelper() {
   Int32 CustomerCount = Nablasoft.Helpers.DataAccess.ExecuteScalar<Int32>(
      delegate(DbCommand command, DbProviderFactory factory) {
         command.CommandType = System.Data.CommandType.Text;
         command.CommandText = "SELECT COUNT(*) FROM Customers WHERE City = " +
            DataAccess.GetParameterName(command, "city");
         Nablasoft.Helpers.DataAccess.AddParameterToCommand(
            command, factory, System.Data.DbType.String, "city", "London");
      });
   Assert.AreEqual(6, CustomerCount);
}

This test runs against a Northwind database; as you can see, the query text is built dynamically thanks to the DataAccess.GetParameterName() helper function. In real production code all queries should be created in advance, to avoid concatenating strings every time a query is run. For this code to work you need to get the right parameter name format from the command; here is the code.

private static String GetParameterFormat(DbCommand command) {

   if (!mParametersFormat.ContainsKey(command.GetType())) {
      mParametersFormat.Add(
         command.GetType(),
         command.Connection.GetSchema("DataSourceInformation")
            .Rows[0]["ParameterMarkerFormat"].ToString());
   }
   return mParametersFormat[command.GetType()];
}

As you can see, I use a simple dictionary called mParametersFormat that caches the formats to avoid calling the slow Connection.GetSchema function. This is not enough for the code to work, because SQL Server returns the wrong {0} format; the obvious solution is to create a static constructor that preloads the cache.

static DataAccess() {
   mParametersFormat = new Dictionary<Type, String>();
   mParametersFormat.Add(typeof(SqlCommand), "@{0}");
}

This makes the above test work, fixing the SQL Server bug. I searched the internet a lot for this issue, but it seems that ParameterMarkerFormat is a strange beast not used by many people; as you can see, it is simpler to add all the formats you need in a static constructor without using ParameterMarkerFormat at all.
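The GetParameterName helper used in the test is never shown; presumably it just applies the cached marker format to the bare parameter name. A simplified, self-contained sketch of that idea (keying by command type name and the Oracle entry are illustrative, not from the post):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical standalone sketch of the GetParameterName idea.
public static class ParameterNameSketch {
    // Preloaded formats, as the static constructor above does for
    // SqlCommand; the Oracle entry is added here for illustration.
    private static readonly Dictionary<string, string> mParametersFormat =
        new Dictionary<string, string> {
            { "SqlCommand", "@{0}" },
            { "OracleCommand", ":{0}" },
        };

    // Applies the provider's marker format to the parameter name.
    public static string GetParameterName(string commandTypeName, string name) {
        return String.Format(mParametersFormat[commandTypeName], name);
    }
}
```

With this, "city" becomes "@city" for a SqlCommand and ":city" for an OracleCommand, which is exactly the per-provider difference the query text needs.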

Alk.