WordPress permalinks make your URL case sensitive

This morning I was browsing some stats that Google collected on my blog and saw some 404 (Not Found) pages. After some inspection I saw that all the 404s have the form

http://www.nablasoft.com/Alkampfer/xxx

They have a capital A in Alkampfer; if I change Alkampfer to alkampfer everything works… the strange thing is that I clearly remember it working when I first set up the blog. After some more inspection I found that the problem is due to permalinks: when you activate permalinks in WordPress, the URL becomes case sensitive.

A solution can be found here.

Now links like http://www.nablasoft.com/Alkampfer/?p=105 or http://www.nablasoft.com/AlkaMPFer/?p=105 work perfectly. It seems to me that it is really better to have a case-insensitive URL.

alk.

Overriding the Equals method for objects used as keys in a hashtable

Overriding the Equals method is not a task for the faint-hearted; it might seem an absolutely simple thing to do, but there are a lot of subtleties that have to be kept in mind.

Here is a typical implementation of Equals for a Customer entity generated by ReSharper (R# lets you define equality members with a single click).

public class Customer : IEquatable<Customer>
{
    public String Name { get; set; }
    public String Surname { get; set; }

    public bool Equals(Customer obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        return Equals(obj.Name, Name) && Equals(obj.Surname, Surname);
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (obj.GetType() != typeof(Customer)) return false;
        return Equals((Customer)obj);
    }

    public override int GetHashCode()
    {
        unchecked
        {
            return ((Name != null ? Name.GetHashCode() : 0) * 397) ^
                   (Surname != null ? Surname.GetHashCode() : 0);
        }
    }
}

It seems really reasonable: it is a standard implementation that uses the two properties of the Customer object. GetHashCode must also be overridden, because the rule is that “if two objects are equal, they must have the same hash code”. R# uses a simple strategy: it combines the hash codes of the properties used in the equality check, multiplying by the prime 397 to reduce hash collisions. This makes sure that if two objects are equal, their hash codes are equal too.

Then run this snippet of code.

// c is a Customer instance, e.g.:
Customer c = new Customer { Name = "John", Surname = "Doe" };

Hashtable table = new Hashtable();
table.Add(c, c);
Console.WriteLine("table.ContainsKey(c)={0}", table.ContainsKey(c));
c.Name = "STILLANOTHERNAME";
Console.WriteLine("table.ContainsKey(c)={0}", table.ContainsKey(c));
Console.WriteLine("table.ContainsValue(c)={0}", table.ContainsValue(c));

Dictionary<Customer, Customer> dic = new Dictionary<Customer, Customer>();
dic.Add(c, c);
Console.WriteLine("dic.ContainsKey(c)={0}", dic.ContainsKey(c));
c.Name = "CHANGEDAGAIN!!!";
Console.WriteLine("dic.ContainsKey(c)={0}", dic.ContainsKey(c));
Console.WriteLine("dic.ContainsValue(c)={0}", dic.ContainsValue(c));

// Customer does not implement IComparable<Customer>, so the sorted tree
// needs an explicit ordering; here I compare by Surname, which never changes
SortedDictionary<Customer, Customer> dic2 = new SortedDictionary<Customer, Customer>(
    Comparer<Customer>.Create((x, y) => String.CompareOrdinal(x.Surname, y.Surname)));
dic2.Add(c, c);
Console.WriteLine("dic2.ContainsKey(c)={0}", dic2.ContainsKey(c));
c.Name = "CHANGED ANOTHER TIME!!!";
Console.WriteLine("dic2.ContainsKey(c)={0}", dic2.ContainsKey(c));

What does it print? For the Hashtable the result is

table.ContainsKey(c)=True
table.ContainsKey(c)=False
table.ContainsValue(c)=True

This is perfectly reasonable: we add the same Customer object as both key and value, but then we change one of its properties. Its hash code changes because the Name property changed, so ContainsKey(c) returns false, because the key lookup is done through the hash code; ContainsValue(c) is instead true, because it is implemented as a simple iteration where each contained object is compared with Equals against the one passed as argument.

When you override Equals and GetHashCode() in such a way, take great care in using the object as a key in a hashtable, or at least keep in mind that when you change a property of the Customer its hash code changes, so the key lookup is done against a hash code different from the one used at insertion time.

But what is the output of the rest of the snippet?

dic.ContainsKey(c)=True
dic.ContainsKey(c)=False
dic.ContainsValue(c)=True

dic2.ContainsKey(c)=True
dic2.ContainsKey(c)=True

This is due to the fact that a Dictionary<K, T> internally uses hash codes, while SortedDictionary uses a tree (a red-black tree in the current implementation) and is therefore not affected by how GetHashCode is implemented. This can lead to problems, because the behavior changes with the inner implementation: when you use Hashtable you know that the hash code is used… but with Dictionary you could be surprised.

An object that redefines value equality should be used as a key in an IDictionary with great care.
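The simplest way to respect this rule is to make the properties that take part in equality immutable, so the hash code cannot change while the object is used as a key. Here is a minimal sketch of this approach (the CustomerKey class is mine, not part of the original snippet):

public class CustomerKey
{
    private readonly String name;
    private readonly String surname;

    public CustomerKey(String name, String surname)
    {
        this.name = name;
        this.surname = surname;
    }

    // read-only properties: the values used for equality can never change,
    // so the hash code computed at insertion time stays valid forever
    public String Name { get { return name; } }
    public String Surname { get { return surname; } }

    public override bool Equals(object obj)
    {
        CustomerKey other = obj as CustomerKey;
        if (ReferenceEquals(other, null)) return false;
        return Equals(other.name, name) && Equals(other.surname, surname);
    }

    public override int GetHashCode()
    {
        unchecked
        {
            return ((name != null ? name.GetHashCode() : 0) * 397) ^
                   (surname != null ? surname.GetHashCode() : 0);
        }
    }
}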

A possible solution, useful when you cannot change the class, is to implement a special IEqualityComparer<Customer>.

public class CustomerReferenceComparer : IEqualityComparer<Customer>
{
    // little hack: InternalGetHashCode is the private method behind
    // RuntimeHelpers.GetHashCode, i.e. the hash Object.GetHashCode
    // returns when it is not overridden
    static System.Reflection.MethodInfo getHash = typeof(object).GetMethod(
        "InternalGetHashCode",
        System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic);

    #region IEqualityComparer<Customer> Members

    public bool Equals(Customer x, Customer y)
    {
        // reference identity, to stay consistent with the identity-based hash below
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(Customer obj)
    {
        return (Int32)getHash.Invoke(null, new object[] { obj });
    }

    #endregion
}

It uses a little hack: it invokes InternalGetHashCode from the Object class through reflection, so the hash is computed in the standard way. This IEqualityComparer treats Customer instances as if they had not redefined Equals and GetHashCode, because it relies on the implementation of the Object class. Now you can write this code.

Dictionary<Customer, Customer> newdic =
    new Dictionary<Customer, Customer>(new CustomerReferenceComparer());
newdic.Add(c, c);
Console.WriteLine("dic.ContainsKey(c)={0}", newdic.ContainsKey(c));
c.Name = "CHANGEDAGAIN!!!";
Console.WriteLine("dic.ContainsKey(c)={0}", newdic.ContainsKey(c));
Console.WriteLine("dic.ContainsValue(c)={0}", newdic.ContainsValue(c));

And you do not need to worry about how Customer redefines GetHashCode(), because you are giving the dictionary your own IEqualityComparer<Customer> object that uses your logic.
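As a side note, the same identity semantics can be obtained without any reflection hack: RuntimeHelpers.GetHashCode is the public API that returns the hash Object.GetHashCode would produce if it were not overridden. A minimal sketch (the class name is mine):

using System.Collections.Generic;
using System.Runtime.CompilerServices;

public class CustomerIdentityComparer : IEqualityComparer<Customer>
{
    public bool Equals(Customer x, Customer y)
    {
        // pure reference identity, ignoring the overridden Equals
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(Customer obj)
    {
        // the identity hash, unaffected by changes to Name or Surname
        return RuntimeHelpers.GetHashCode(obj);
    }
}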

Alk.


Lambda recursion: pay attention to performance

This morning I stumbled across this old post, which shows how to create a recursive function with a lambda. The article is very interesting and has a second part that deals with memoization. These two articles are really great, but I want to point out that you really need to pay attention to performance every time you deal with recursion. This piece of code shows an interesting thing:

class Program
{
    private static Stopwatch sw = new Stopwatch();

    static void TestSpeed(Func<int, int> f, int i, string msg)
    {
        sw.Reset();
        sw.Start();
        int res = f(i);
        sw.Stop();
        Console.WriteLine("{0,-9}{1,7} = {2,10} => {3,8:f3} ms",
            msg, "(" + i + ")", res, sw.ElapsedMilliseconds);
    }

    public static void Main()
    {
        Func<int, int> fib = Extend.Y<int, int>(f => n => n > 1 ? f(n - 1) + f(n - 2) : n);
        TestSpeed(fib, 37, "fib");

        Func<int, int> fib2 = null;
        fib2 = n => n > 1 ? fib2(n - 1) + fib2(n - 2) : n;
        TestSpeed(fib2, 37, "fib2");
    }
}

I use the Y function described in that article, so the previous code creates two functions: the first, called fib, is the really recursive one; the other, called fib2, is the standard one that is not really recursive, as explained in the first article. If we look at the output we see that fib is much slower than fib2.

fib  (37) = 24157817 => 7807.000 ms
fib2 (37) = 24157817 =>  822.000 ms

The reason is that the Y function creates the recursive lambda by inserting an intermediate delegate into the call chain; this becomes clear if you modify the code in this way:

Func<int, int> fib3 = Extend.Y<int, int>(f => n =>
{
    StackTrace st = new StackTrace();
    Console.Write("{0}-{1} ", n, st.FrameCount);
    return n > 1 ? f(n - 1) + f(n - 2) : n;
});
TestSpeed(fib3, 5, "fib3");
Console.WriteLine();

Func<int, int> fib4 = null;
fib4 = n =>
{
    StackTrace st = new StackTrace();
    Console.Write("{0}-{1} ", n, st.FrameCount);
    return n > 1 ? fib4(n - 1) + fib4(n - 2) : n;
};
TestSpeed(fib4, 5, "fib4");
Console.WriteLine();

I simply print the value of n and the FrameCount of the stack; the result is

5-4 4-6 3-8 2-10 1-12 0-12 1-10 2-8 1-10 0-10 3-6 2-8 1-10 0-10 1-8
5-3 4-4 3-5 2-6 1-7 0-7 1-6 2-5 1-6 0-6 3-4 2-5 1-6 0-6 1-5

It is quite cryptic, but you can see that the FrameCount in the upper row (the really recursive fib3) is higher than in the lower row. If instead of printing st.FrameCount you print st.ToString() (the full stack), you obtain a lot of output. This is the part for the really recursive function:

[4]- at ConsoleApplication1.Program.<>c__DisplayClass6.<Main>b__1(Int32 n)
     at ConsoleApplication1.Extend.<>c__DisplayClassb`2.<>c__DisplayClassd.<Y>b__a(A a)
     at ConsoleApplication1.Program.<>c__DisplayClass6.<Main>b__1(Int32 n)
     at ConsoleApplication1.Extend.<>c__DisplayClassb`2.<>c__DisplayClassd.<Y>b__a(A a)
     at ConsoleApplication1.Program.<>c__DisplayClass6.<Main>b__1(Int32 n)
     at ConsoleApplication1.Extend.<>c__DisplayClassb`2.<>c__DisplayClassd.<Y>b__a(A a)
     at ConsoleApplication1.Program.TestSpeed(Func`2 f, Int32 i, String msg)
     at ConsoleApplication1.Program.Main()

Compare it with the corresponding run of fib4 (the not really recursive lambda):

[4]- at ConsoleApplication1.Program.<>c__DisplayClass4.<Main>b__2(Int32 n)
     at ConsoleApplication1.Program.<>c__DisplayClass4.<Main>b__2(Int32 n)
     at ConsoleApplication1.Program.<>c__DisplayClass4.<Main>b__2(Int32 n)
     at ConsoleApplication1.Program.TestSpeed(Func`2 f, Int32 i, String msg)
     at ConsoleApplication1.Program.Main()

As you can see in the first listing, for each recursion step we have two function calls on the stack; this is due to the Y operator used to create the really recursive lambda function.

// the Recursive delegate comes from the referenced article:
// a delegate that takes itself as argument and returns the function
private delegate Func<A, R> Recursive<A, R>(Recursive<A, R> r);

public static Func<A, R> Y<A, R>(Func<Func<A, R>, Func<A, R>> f)
{
    Recursive<A, R> rec = r => a => f(r(r))(a);
    return rec(rec);
}

This function takes the original lambda and converts it into a really recursive one, but to do so it creates another lambda that actually does the magic.
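To see where the extra delegate comes from, here is my own step-by-step expansion of a call (a sketch, not compiler output):

// rec(rec) returns the wrapper:  a => f(rec(rec))(a)
//
// fib(5)
//   -> wrapper(5)             // frame 1: the delegate built by Y
//   -> f(rec(rec))(5)         // frame 2: the real fib body
//   -> since 5 > 1, the body calls f(4), which is again a wrapper
//   -> wrapper(4)             // frame 3
//   -> f(rec(rec))(4)         // frame 4: the body again
//   -> ...
//
// every logical recursion step costs two stack frames: one for the wrapper
// delegate created by Y and one for the body of the function itself.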

Let’s see with Reflector what happens; this is the disassembly for fib2 (the not really recursive function).

[Image: Reflector view of the compiler-generated closure class for fib2]

The compiler generates a class that holds the fib2 delegate as a field, and the b__0 method simply uses the delegate we defined. From this code you can see that the function is not really recursive, because b__0 simply invokes the fib2 delegate. A very different situation arises when we use the Y function; I let you use Reflector to check the generated code.
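For reference, the generated code for fib2 is morally equivalent to this hand-written version (FibClosure and Body are illustrative names; the compiler emits mangled ones like <>c__DisplayClass4 and <Main>b__2):

sealed class FibClosure
{
    public Func<int, int> fib2;

    public int Body(int n)
    {
        // the recursive call goes through the captured field, not directly
        // to Body: one delegate dispatch, but a single stack frame per step
        return n > 1 ? this.fib2(n - 1) + this.fib2(n - 2) : n;
    }
}

// what "fib2 = n => ..." becomes, more or less:
// FibClosure closure = new FibClosure();
// closure.fib2 = closure.Body; // the lambda becomes a delegate to Body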

Remember also that recursion is a beautiful technique, but it can be dramatically slower than an iterative algorithm: this naive recursive Fibonacci recomputes the same values an exponential number of times. Here is a standard implementation of Fibonacci that is not recursive:

Func<int, int> fib3 = n =>
{
    Int32 result = 1;
    Int32 previous = -1;
    for (Int32 num = 0; num <= n; ++num)
    {
        Int32 newFibNumber = result + previous;
        previous = result;
        result = newFibNumber;
    }
    return result;
};

Compare the timings with the recursive ones:

fib  (37) = 24157817 => 7874.000 ms
fib2 (37) = 24157817 =>  829.000 ms
fib3 (37) = 24157817 =>    0.000 ms

The non-recursive version is almost instantaneous.
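If you want to keep the recursive shape and still get decent speed, the memoization technique discussed in the second article mentioned above caches every computed value, so each subproblem is solved only once. Here is a minimal sketch of the idea (Memoize is my own helper, not the article's exact code):

static Func<int, int> Memoize(Func<int, int> f)
{
    Dictionary<int, int> cache = new Dictionary<int, int>();
    return n =>
    {
        int result;
        if (!cache.TryGetValue(n, out result))
        {
            result = f(n);
            cache[n] = result; // every value is computed at most once
        }
        return result;
    };
}

// usage: the recursive calls go through fibMemo, hence through the cache
Func<int, int> fibMemo = null;
fibMemo = Memoize(n => n > 1 ? fibMemo(n - 1) + fibMemo(n - 2) : n);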

alk


We love our hardware, it's really true

As Jeff Atwood says, “we love computers”. Yes, we are developers, but in the end we chose this profession because it is the one that lets us work with the object of our love… the computer.

I’m one of those people who build their own development machine: I want to choose every piece of hardware. This is mainly because I worked as a technician during my university years, so I am used to dealing with a lot of different hardware, and I cannot imagine owning a computer that is “packaged by someone else”; every part of my PC is there because I want it to be there.

I use an Asus P5KL motherboard. I’m a three-monitor guy, but I make do with a standard motherboard: one video card in the PCI Express slot and another in a standard PCI slot, and it works really well.

For the CPU, same as Jeff: clock speed really matters, so I have a 3.0 GHz dual core E8400 with 6 MB of L2 cache, which is really good for speed.

Same for RAM, except I only have 2 GB: 2x1 GB Kingston DDR 800 (Kingston really is reliable RAM at an affordable price).

Oh my god, same choice for the disk :D, a Western Digital 10,000 RPM; mine is only 150 GB because I have another 150 GB 7200 RPM drive as a secondary disk. Having two drives lets me put the virtual disks of my virtual machines on the secondary one; performance is better when you can split data across two disks, even if one of them is slower.

For the video card I chose a modest Radeon X1550 coupled with a PCI Radeon 7000. I remember the old days when I did 3D graphics and looked for the most expensive video card, to have pixel and vertex shaders to play with… now I really do not have the time for that anymore, so a modest video card is enough for me. Yes, it is true that a GPU performs much better than a standard CPU, but that is to be expected, since GPUs are built for a specific task; since I do not use graphic effects (I even turned off Aero on my Vista laptop), the only thing I ask of my video card is not to give me problems 😀

Then I have a standard case with standard coolers, except for the fact that I usually put a dedicated cooler on each hard drive… the result is surely much noisier than Jeff’s machine. In the end I’m amazed at how similar my PC is to Jeff’s, and the reasons that led me to choose this particular hardware are the same.

Actually I’m planning to get a 15,000 RPM SAS disk as my birthday gift :D; after all, I can see that Visual Studio loves quick hard drives.

alk.


WCF services and Silverlight

I’m taking my first steps with Silverlight, and today I ran into a very strange behavior. First of all, I noticed that in my web project I can add either a WCF service or a Silverlight-enabled WCF service. My first thought was… “why should a WCF service be different for Silverlight?”

The only difference is that with a Silverlight-enabled WCF service you get a service already configured in a very specific way. First of all, it does not separate the contract into an interface, so you have only the service class, like an old asmx service; moreover, it configures web.config to use basicHttpBinding, because Silverlight does not support WS-*. But the most important thing regards ASP.NET compatibility mode.

I deployed to my site a WCF service created in another DLL: I created the .svc file, configured web.config to use basicHttpBinding, and called the service from my Silverlight application. The service uses a Repository pattern based on NHibernate, so I have several session managers used in various scenarios. The web one is the simplest, because you can use the famous session-per-request pattern. It turned out that my session manager did not seem to work anymore; after a close inspection I found that HttpContext.Current is null inside the WCF service, so the session manager was not working properly.

This is the standard behaviour, because a WCF service has nothing to do with the concept of a request; the fact that it is hosted in IIS only means that you want an endpoint in IIS. Since a WCF service can be hosted even in a simple console application, it makes no sense to assume that it has an HttpContext, even when it is hosted in IIS. As explained in this article, you can ask IIS to host the service in “ASP.NET compatibility mode”, which is the default if you create a Silverlight-enabled WCF service. This is obtained through a setting in web.config:

<serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>

But this only enables compatibility mode for the site; each service must then declare whether it can run in this mode. To make a service compatible you can decorate its implementation in this way:

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class TestServer : IExtTest

If you specify Allowed, the service will run whether or not compatibility mode is enabled; if you specify Required or NotAllowed, the service can run only if compatibility mode is enabled or disabled, respectively.
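To make the difference concrete, here is a minimal sketch of a service that requires compatibility mode so it can safely touch HttpContext.Current (the GetCurrentUrl operation and its body are illustrative, not my real service):

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract]
public interface IExtTest
{
    [OperationContract]
    string GetCurrentUrl();
}

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class TestServer : IExtTest
{
    public string GetCurrentUrl()
    {
        // without aspNetCompatibilityEnabled="true" this would be null
        HttpContext context = HttpContext.Current;
        return context == null ? "no HttpContext" : context.Request.Url.AbsoluteUri;
    }
}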

Alk.
