Slow tests with NUnit 2.4 and NHibernate

I noticed that the NUnit 2.4 test runner is much slower than the 2.2 one. The reason is that in 2.4 the test runner uses log4net as its default logger, and if you do not disable logging you will see an enormous amount of text in the log tab.

The runner uses a default log level of “DEBUG”, which in turn means that NHibernate runs with full logging enabled, and this is a real waste of time because NHibernate logs practically everything at DEBUG level. The solution (until version 2.6) was to disable the log tab of the test runner, but in subsequent versions this behavior seems to have been corrected. So upgrade to the latest version if you experience slow tests with NHibernate and NUnit 2.4.x where x < 8 :D.
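If upgrading is not an option, you can also silence NHibernate at the source by raising its logger levels in the test project's configuration file. This is just a sketch of a typical log4net section (the logger names are the standard ones NHibernate writes to; the rest of your configuration stays as it is):

```xml
<log4net>
  <!-- NHibernate logs practically everything at DEBUG;
       raising these loggers to WARN removes the flood of text -->
  <logger name="NHibernate">
    <level value="WARN" />
  </logger>
  <!-- NHibernate.SQL is the logger that dumps every generated SQL statement -->
  <logger name="NHibernate.SQL">
    <level value="WARN" />
  </logger>
</log4net>
```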


Favor small and frequent checkins over big ones.

This is a rule I have tried to follow for a long time, and a few days ago Jeff Atwood reinforced the concept on his blog. I completely agree with him: code should be checked in often, especially when you have a continuous integration server. Checking in often reduces the risk of conflicts, makes tests run more often (you should set up your continuous integration server to run all tests for each checkin) and makes integration simpler. The benefits of frequent checkins are:

  • Other developers are immediately aware of your modifications, so you get immediate feedback
  • As I said, tests are run often together with the code of other developers (think of the scenario where you finally merge your changes and painfully discover that a lot of tests are now failing)
  • If some code path has gone wrong, you can simply revert your local changes and begin again from a good starting point.

I usually follow the pattern of “implement something new, run all tests, update local files to check for conflicts, resolve any conflicts, then commit with a comment explaining the reason for the commit”. When I fix bugs, I fix a single bug, then run the tests, update, resolve conflicts and commit with a comment that gives the number of the bug that was fixed.

But sometimes programmers embark on big changes to the code: they begin to change a lot of files and tend not to check in until the work is finished. This is wrong. One possible solution is to create a branch: you can check in often without the risk of breaking the build, you can check in incomplete code (the only condition is that it compiles), and you can watch the changes to the corresponding files on the trunk, so you can merge often from the trunk into your branch. Finally, when you are done with the change, you can merge the last bits and move the new code into the trunk. The greatest advantage of this approach is that the other programmers are continuously aware of your work. Suppose programmer A finds a bug in class Foo: he can fix the bug on the trunk and immediately make the same correction on all active branches.

Another approach can be taken when you use IoC in your application. Suppose I need to radically refactor component Foo, which implements the interface IFoo. I simply create another component called Foo2 or FooBetter or whatever you like, develop it, test it, and when I think it is finished I simply change the configuration file to use the new component. When everyone agrees that the new Foo is OK, I can delete the old one. The good thing is that I can run the whole test suite of the old Foo class against the new one until all the tests pass.
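To make the idea concrete, here is a sketch of what the configuration switch could look like with an XML-configured IoC container (Castle Windsor-style syntax; the component names and the MyApp assembly are hypothetical):

```xml
<components>
  <!-- the old implementation, kept around commented out
       until everyone agrees the replacement is ok
  <component id="foo"
             service="MyApp.IFoo, MyApp"
             type="MyApp.Foo, MyApp" />
  -->
  <!-- the refactored component: only the type changes, and every
       consumer of IFoo picks it up with no code modification -->
  <component id="foo"
             service="MyApp.IFoo, MyApp"
             type="MyApp.Foo2, MyApp" />
</components>
```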

Both these approaches are better than keeping local changes for too long and doing a final big checkin. Remember the rule: favor small and frequent checkins over big and infrequent ones.




Adobe Acrobat: is newer better or worse?

I’m working with the OpenXml format of Office 2007, so I keep the reference document open because I constantly need to search for information in it. The reference document is a huge 5,200-page, 37 MB PDF. When I open it in Adobe Reader 8 the Acrobat process uses 60 MB of RAM, and when I start a text search it is incredibly slow and Acrobat begins to use even more memory. To make a comparison I searched for the text youcannotfoundThis, and after 30 seconds Acrobat was still at page 500, with memory consumption at 125 MB.

Then I went to this site and downloaded version 4.05. With this really old version of Acrobat Reader I opened the document and did the same search for the word youcannotfoundThis. It reached page 500 almost immediately, so it is much faster than the latest version, but after 10 seconds it reached page 2,500 and the search became really slow, about 10 pages per second. Memory consumption is really low, around 20 MB, but I could not search past page 2,500 because the search speed dropped almost to zero.

I tried Acrobat 5.05: it is slower than 4.05 but still much faster than 8.0, reaching page 2,300 in 30 seconds. Memory consumption is 22 MB, but again, once the search reached page 2,500 it hung and the speed dropped to zero. When I open the document, Acrobat 5.05 tells me that the document uses some newer features of PDF, so I should use the latest version of the reader.

Then I checked the Adobe site and saw that the latest version is 9.0. I downloaded it and tried to open the document again: same story. It is a little faster than 8.0, but in 30 seconds the search reached only page 650.

I really dislike this. I think the average user uses Acrobat Reader to open, read, search and print documents, and all these features were already supported by version 4.0. I do not really know the newer features of the latest versions; the only thing I notice is that newer versions of Acrobat Reader are much slower. Adding new features is worth nothing if the old basic ones become less usable. Now, if I search for a term whose first match is on page 2,000, I have to wait almost two minutes for Acrobat to find it, while version 4.0 found it in 17 seconds. I would prefer Adobe to take version 4.05 and make it handle the new versions of PDF without hanging, instead of making newer and slower versions.


A really strange error on a production server: a tale of named pipes

I have an application on a production server that ran for months without errors. Ten days ago the Elmah page began to show an error occurring in a lot of pages.

The application uses a standard data access layer, but one module uses NHibernate, and it turned out that all the errors happened when a page used some function in this module. I tried to access some pages in the site that use NHibernate, and every page raised an error of this type:

System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown.
 ---> NHibernate.ADOException: cannot open connection
 ---> System.Data.SqlClient.SqlException: An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
   at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
   at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
   at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
   at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
   at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
   at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.SqlClient.SqlConnection.Open()
   at NHibernate.Connection.DriverConnectionProvider.GetConnection()

The astonishing thing is that all the parts that use the standard data access layer work well: only NHibernate suffers from this error. After an inspection of web.config I saw that the only difference is that the DAL uses this connection string:

Database=databasename;Server=localhost\instancename;Integrated Security=SSPI

while NHibernate uses this one:

Database=databasename;Server=(local)\instancename;Integrated Security=SSPI

Clearly I thought that using localhost or (local) would not make any difference, but when I changed it to localhost the error disappeared and the application worked perfectly… When I reverted to (local) the exception came back again, so it is the cause… but why??

After a little searching I stumbled upon this post, which explains that (local) is not the same as localhost: the first uses named pipes and the latter uses TCP. Since my exception message told me that the error was in the Named Pipes provider, it turns out that using (local) was making the client connect through named pipes instead of TCP.
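Incidentally, SqlClient also lets you force the protocol explicitly with a prefix on the server name (tcp: for TCP, np: for named pipes), which takes the guesswork out of aliases like (local). A sketch based on the connection strings above:

```
Database=databasename;Server=tcp:localhost\instancename;Integrated Security=SSPI
```

With the tcp: prefix in place, the connection will fail outright rather than silently fall back to a protocol you did not intend.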

Now the error seems to have gone away, but I wonder:

1) why did the software run fine for almost one year without problems? (maybe some Windows update?)

2) why should using (local) be different from using localhost in the connection string? (It violates the principle of least surprise.)

3) why does the named pipe start to work again if I reboot the server… but then, after some days, stop working until I reboot the machine again? (I tried this on a preproduction site used to test the server.)




Removing comment moderation

I decided to remove comment moderation on the blog. I used it mainly to be sure that no spam passed the filter, but after some months of testing I have verified that Akismet really does a good job, intercepting all the spam comments.

Moreover, I set up the WP-reCAPTCHA plugin to add a CAPTCHA at the end of the post form, making life harder for spammers. I am really glad to have moderation removed, primarily because I had to check it often to be sure that no comment got lost; I also like that readers can immediately see their comments on the post. A blog is dead without comments enabled.