One of the greatest problems in the software industry is the rapid change in technologies and tools, a change so rapid that your knowledge becomes stale or obsolete before you have time to fully master your new toys. In this scenario the main problem is how to evaluate some new “stuff” without losing money and, most important, how to understand whether the new “stuff” is really better than the old one, so that it is worth continuing to spend time on it instead of moving on to other, newer “stuff”.

In a traditional long term planning approach, like waterfall, this type of evaluation is really hard to obtain. You have several options, but none of them is really good. Suppose you have a six month project and you have heard of some new “stuff” that seems promising: how should you handle it?

Waterfall or traditional approach

Gain complete knowledge of the “stuff” before starting the project: with this approach you assign some tasks to resources to learn the new “stuff”. While people are doing requirement analysis, some resources start learning, and when it is time to outline the architecture of the software they decide whether the new “stuff” should be used in the project. The major drawback of this approach is that you have no real experience of how the new “stuff” will behave in your specific project with your real requirements: the decision is made with incomplete information, and the risk is discovering in the middle of the project that the new “stuff” is actually worse than the previous one.

Learn as you go, planning for a little delay in the project due to the introduction of the new “stuff”: with this approach you actually believe that the new “stuff” is useful, probably because you got recommendations from other teams. Armed with this knowledge you decide to introduce it in your project, accommodating a little delay in the development phase due to the time needed by developers to learn it. This is a risky path: what happens if, in the middle of the project, something goes wrong with the new “stuff”? How much delay can you expect and, most important, how can you justify the delay to the customer? If at the end of the project the team tells you that the new “stuff” is not good, you have probably wasted a great amount of time.

Use the new “stuff” in a little part of the project, not a critical one, to evaluate it: this is the most conservative approach; you design your product to use the new “stuff” in a limited and non-critical part. This gives the development team the ability to learn it on a real project, and if the new “stuff” proves to be bad for the team, the risks are mitigated. The drawback is that you are evaluating the new “stuff” in an unimportant area: even if everything goes well, no one can assure you that it will work for the critical parts of the software.

All these approaches suffer from several problems, but the most dangerous one is this: after the project is finished (usually late on schedule), it is difficult if not impossible to answer the question: did the introduction of the new “stuff” help the team in the project? Some team members will answer yes, some others no, because after such a long time, and in projects that are quite late on schedule, it is nearly impossible to evaluate the new “stuff” in isolation, and it is easy to blame it as the root cause of the delay. E.g.: “we are late, but we introduced XXX and YYY, so the estimate was incorrect due to missing knowledge of these new tools”.

Scrum to the rescue


If you manage your project with Scrum the situation is really different. First of all, you usually introduce the new “stuff” because there is some evidence that it can be useful to solve a User Story of the current Sprint (or one that will probably be done in a couple of Sprints). Maybe the customer asked for some advanced search functionality, with OR, AND and advanced syntax, and the team knows that there are some tools based on Lucene (e.g. Elastic Search) that can tremendously help in implementing the User Story. If you start evaluating new “stuff” on a real User Story you have two great advantages: you start using it where you really need it, and you start using it on a limited part of the project. If the new “stuff” turns out to be really wrong, at worst some User Stories will slip into the next Sprint; this minimizes the risk.

Another advantage of Scrum is the Sprint Retrospective, when the team answers a couple of questions: what went well and what could be improved. The Scrum process gives you the opportunity and the time to discuss the new “stuff” as soon as it is used to accomplish real, useful work, giving you honest and useful feedback. At the end of the first Sprint after a new “stuff” is introduced, the team has probably not mastered it yet, but they can probably tell whether it is worth spending more time on or it was a waste of time. Sprint after Sprint the knowledge of the new “stuff” improves and the team can refine the judgment. In each Sprint you can extend the use of the new “stuff” to other areas of the project, if the team thinks it can help implementing User Stories. This permits you to use the new “stuff” where you really need it and to introduce it gradually in your team. The key part is that the evaluation is based on how much the new “stuff” helped the team to implement User Stories and satisfy the customer.

In a rapidly evolving industry like software development, using Scrum is a really good way to constantly evaluate new technologies in your team, with little risk but with real world feedback at the same time.

If you do not know what Scrum is, or you want to refine your knowledge of it, I suggest you read the guide at http://www.scrum.org as well as some introductory book, e.g. Software in 30 Days.

Gian Maria.


Coded UI Tests are a specific type of UI testing introduced with Visual Studio 2010. You can create your first Coded UI Test following the simple instructions in the MSDN documentation. Most of the introductory examples show how you can use the Recorder tool to record interactions with a piece of software (Web, WinForms, WPF, etc.) and generate what is called a UiMap. A UiMap is nothing more than a big XML file where the recorder stores the interactions with the UI, plus a bunch of automatically generated classes to interact with the UI.

Using a UiMap is probably not the best option for large projects, because the cost of maintaining it can become really high. This is usually not a big problem, because the UiMap is used to generate code based on a set of classes, belonging to the Visual Studio Testing Framework, that make it possible to interact with a UI from code. If maintaining a UiMap is difficult for you, you can use these classes directly in your tests. To show you the “hello world” equivalent of a Coded UI Test, here is the code needed to open a page in a browser and click a hyperlink.

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;

// Launch the browser; the using block guarantees it is closed at the end.
using (BrowserWindow browserWindow =
    BrowserWindow.Launch(new System.Uri("http://tailspintoys.azurewebsites.net/")))
{
    // Describe the hyperlink to search for: a control contained in the
    // browser window whose InnerText is "Model Airplanes".
    HtmlHyperlink link = new HtmlHyperlink(browserWindow);
    link.SearchProperties.Add(
        HtmlHyperlink.PropertyNames.InnerText,
        "Model Airplanes");

    // The engine locates the control on the page only here, when the
    // control is actually used.
    Mouse.Click(link);
}

The code is really simple: you use the BrowserWindow.Launch static method to create an instance of the BrowserWindow class pointing to a given Url. The BrowserWindow class is a wrapper, defined in the Visual Studio Coded UI assembly, that abstracts the interaction with a web browser. The next step is locating the hyperlink you want to click, an operation that can be accomplished with the HtmlHyperlink object. This object derives from the UITestControl base class and abstracts the concept of a control in the User Interface. The constructor of the HtmlHyperlink object needs an instance of a containing control, in this example the whole browserWindow object; the container acts as the root control whose descendants will be searched for the control you are looking for.

To specify the exact hyperlink control you want to interact with, you should populate the SearchProperties collection, specifying the criteria you want to use. In this example I used the InnerText property, but you can use a lot of other criteria: thanks to the PropertyNames static collection of the HtmlHyperlink object you can enumerate all the properties that can be used to locate the control. InnerText is usually not the best option; using a unique Id is usually a better approach, but the key concept is: you should use the criteria that are most stable in your scenario/environment. If you can ask the development team to assign a unique id or a unique name to each control, tests will be more robust and quicker.
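For example, here is how the same link could be located through its id instead of its inner text. This is a minimal sketch: the id buyModelAirplanes is hypothetical and just assumes the development team assigned a unique id to the link.

// Hypothetical example: locate the link through a unique id, assuming
// the page markup contains <a id="buyModelAirplanes">...</a>.
HtmlHyperlink linkById = new HtmlHyperlink(browserWindow);
linkById.SearchProperties.Add(
    HtmlHyperlink.PropertyNames.Id,
    "buyModelAirplanes");
Mouse.Click(linkById);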

Once the SearchProperties collection is filled with criteria, you can interact with the control, accessing its properties or passing it to the Mouse.Click method to simulate a click. The Coded UI engine locates the control on the page only when you access its properties or pass the control to some method that interacts with it. This is really important: until you access its properties, the engine will not try to locate the control on the UI.
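If you want to see when the search actually happens, you can trigger it explicitly with a couple of methods exposed by the UITestControl base class; this little sketch reuses the link control declared in the previous example.

// Nothing has been searched on the page so far: it is the call to Find
// (or TryFind) that forces the engine to locate the control.
link.Find();                  // throws if the control cannot be located
bool found = link.TryFind();  // returns false instead of throwing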

Remember to enclose the BrowserWindow object in a using block; this ensures that the instance of the browser opened during the test is always closed, and prevents multiple browser windows from remaining open after the test if some exception occurs.

Gian Maria.


I’ve already covered the installation of the Monitoring Agent for Visual Studio Online Application Insights. The service is currently in preview and the setup experience has changed from the first version. As of today, during the installation phase, setup asks you the type of agent you need, but it does not ask whether you want to automatically monitor all of your web applications. After the installation you can configure Microsoft Monitoring Agent from the Control Panel.


Figure 1: Configuration of MMA

From this configuration panel you should insert the Account ID and Instrumentation Key that you find in your Visual Studio Online account. To find these values, simply log in to VSO and go to your Application Insights hub, then press the little “gear” icon in the upper right corner to configure Application Insights. In that area you have a dedicated section called Keys & Downloads.


Figure 2: Configuration page with all needed data for Microsoft Monitoring Agent.

Once MMA is configured correctly, you can start monitoring a web site with this simple command.

 Start-WebApplicationMonitoring -Name WebSiteName -Cloud

This simple command creates a file called ApplicationInsights.config in the web site folder, with default values to enable monitoring of the application. Actually this is not the preferred way to proceed, because you can install the appropriate Visual Studio add-in that allows you to simply add the telemetry configuration to your site directly from Visual Studio.


Figure 3: Add Application Insights telemetry to your projects.
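If you prefer to keep working from the console, the Microsoft Monitoring Agent module also ships some companion cmdlets to inspect and stop monitoring. The lines below are a sketch based on the current preview; cmdlet names and parameters may change, so verify them with Get-Command on your machine.

# List the monitoring status of the web applications on this server.
Get-WebApplicationMonitoringStatus

# Stop monitoring the site when you are done experimenting.
Stop-WebApplicationMonitoring -Name WebSiteName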

Whether you use the Visual Studio add-in or you add the application via the Start-WebApplicationMonitoring commandlet, I usually change the name of the application immediately. This can be done in the ApplicationInsights.config file that is located in the site root.


Figure 4: Change the component Name of the application
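For reference, the fragment of ApplicationInsights.config involved looks roughly like the sketch below. The element names are an assumption based on the current preview and may differ in your version, so edit the file you actually find in the site root rather than pasting this in.

<!-- Hypothetical fragment of ApplicationInsights.config: the component
     name is what Application Insights displays for this application. -->
<ComponentSettings>
  <ComponentName>AzureVM - TailspinToys</ComponentName>
</ComponentSettings>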

This is useful because I do not like to see my application in Application Insights with the same name as the site in IIS. I like to monitor not only the production application: it can be useful to enable monitoring also for Test and Pre-Production deployments, and in such a scenario the ability to change the name I see in Application Insights is a key feature. As you can see in the above picture, I changed the name to AzureVM – TailspinToys (in IIS the web site is simply named TailspinToys). After a couple of minutes Application Insights starts to receive data.


Figure 5: Data starts flowing to Application Insights.

Gian Maria.


If you want to upgrade Release Management for TFS 2013 to Update 1, you have surely noticed that there is no Update 1 upgrade package: you should first uninstall the old version of Release Management and then install the version with Update 1.

While this does not delete any previous settings, and simply upgrades the database to the new structure, it is possible that after upgrading, when you try to connect with the Release Management Client, you get an error telling you that the Release Management Server is not working. Before starting to panic about your installation, you should check whether you erroneously chose the HTTPS protocol instead of HTTP.


Figure 1: Release Management Server Update 1 configuration panel

If you compare this configuration panel with the standard one of Release Management Server without Update 1, you can notice that there is no HTTPS option.


Figure 2: Release Management Server configuration panel (without Update 1)

Support for HTTPS was introduced with Update 1 and it is the default in the configuration options (see Figure 1), but if you are upgrading, your old installation did not use HTTPS (because the option was not supported).

You should change the configuration back to standard HTTP and everything should work. So please pay attention, when you are upgrading Release Management to Update 1, to choose HTTP and not HTTPS during configuration, if you do not want to break any client configuration.

Gian Maria.


If you are used to installing Solr in a Windows environment and you install for the first time a version greater than 4.2.1, you can have trouble getting your Solr server to start. The symptom is: the service is stopped in the Tomcat Application Manager and, if you press start, you get a simple error telling you that the application could not start.

To troubleshoot this kind of problem you can go to the Tomcat log directory and look at the Catalina log, but you will probably find little information there.

Mar 06, 2014 7:02:07 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Error filterStart
Mar 06, 2014 7:02:07 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Context [/solr47] startup failed due to previous errors

The reason for this is a change in the logging subsystem introduced after version 4.2.1, which is explained in the installation guide: Switching from Log4J back to JUL. I’ve blogged about this problem in the past, but it seems to still bite some people, so it is worth spending another post on the subject. The solution is in the above link, but essentially you should open the folder where you unzipped the Solr distribution, go to example/lib/ext and copy all the jar files you find there into the Tomcat lib subdirectory.


Figure 1: Jar files needed by Solr to start
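From a command prompt the copy boils down to something like this; the paths are hypothetical and depend on where you unzipped Solr and where Tomcat is installed, so adjust them to your environment.

REM Copy the Solr logging jars into Tomcat's lib folder (adjust paths).
copy "C:\solr-4.7.0\example\lib\ext\*.jar" "C:\Program Files\Apache Software Foundation\Tomcat 7.0\lib"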

After you have copied these jar files into the Tomcat lib directory, you should restart Tomcat, and now Solr should start without problems.


Figure 2: Et Voilà, Solr is started.

Gian Maria.
