Analyze your project with SonarQube in TFS Build vNext

When you have your SonarQube server up and running, it is time to put some data into it. You will be amazed by how simple this is with build vNext and Visual Studio Online.

Installing the analyzer

As a prerequisite, you need to install Java on the machine where the agent is running, then download the MSBuild.SonarQube.Runner, unblock the zip, and unzip everything to a folder on your hard drive. Make sure that folder is in your PATH so you can launch the runner from any command prompt.
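If you prefer to script these steps, here is a minimal PowerShell sketch; the paths are my assumption (adjust them to your machine) and Expand-Archive requires PowerShell 5:

# Assumed download location and target folder: adjust to your environment
$zip  = "C:\Downloads\MSBuild.SonarQube.Runner.zip"
$dest = "C:\SonarQube\Runner"

Unblock-File $zip                           # removes the "file downloaded from internet" flag
Expand-Archive $zip -DestinationPath $dest  # requires PowerShell 5, otherwise unzip manually

# Add the runner folder to the machine PATH so it can be launched from any prompt
[Environment]::SetEnvironmentVariable(
    "Path",
    [Environment]::GetEnvironmentVariable("Path", "Machine") + ";" + $dest,
    "Machine")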

Then open the SonarQube.Analysis.xml file and change the configuration.

Figure 1: Configuration of Msbuild SonarQube runner
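For reference, the file is a simple list of key/value properties, something like the following sketch (all values are placeholders, and the sonar.jdbc.* entries are needed only as long as the runner writes directly to the database, as it does in this version):

<SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <Property Name="sonar.host.url">http://mysonarserver:9000</Property>
  <Property Name="sonar.jdbc.url">jdbc:jtds:sqlserver://mydbserver/Sonar;SelectMethod=Cursor</Property>
  <Property Name="sonar.jdbc.username">sonar</Property>
  <Property Name="sonar.jdbc.password">xxxxxxxx</Property>
</SonarQubeAnalysisProperties>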

Remember that you need to open the firewall port of the server where the SonarQube database is running, because the agent connects directly to the database to save the results of the analysis. This is a common source of errors: you might incorrectly assume that the agent talks to the server through an endpoint (that will only be available in future versions of SonarQube).

Pay attention: the agent saves results directly to the database.

Remember also to install the C# plugin, or whatever plugin you need (e.g. Visual Basic .NET), to support the language/technology you are using.

Manual analysis

On the machine where the vNext agent is running you need to be sure that everything is ok. Just open a command prompt and navigate to a folder containing the project you want to analyze (you can use the _work folder of the agent). Once you are in a folder with a .sln file, you can start the analysis with this command.

msbuild.sonarqube.runner begin
    /key:JarvisConfigurationManager
    /name:JarvisConfigurationManager
    /v:1.1

This command connects to the SonarQube server and downloads the analysis profile. Then you should launch msbuild to compile the solution, and finally run the real analysis with the end command.
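The compile step in the middle is plain MSBuild; for example (the solution file name here is hypothetical):

msbuild JarvisConfigurationManager.sln /t:Rebuild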

msbuild.sonarqube.runner end

Be sure to verify that everything works with a manual analysis, because troubleshooting problems this way takes much less time than inside a build.

If everything goes well you should see some data inside your SonarQube server. Doing a manual analysis is a must: it verifies that Java is installed correctly, firewall ports are open, DNS names resolve, and so on. Once you can run a manual analysis you are 99% sure that the analysis will also work during the build.

Running in Build vNext

If everything is ok, I suggest tagging the agent with a SonarQube capability, to identify this agent as capable of doing SonarQube analysis.

With a custom capability we can identify the agents that can perform specific tasks.

Figure 2: Adding custom capability to the agent

Now the build must be changed to require this specific capability from the agent.

Figure 3: Adding Demands on the build to request specific capabilities

Using a custom capability is a good way to communicate that someone manually tested the SonarQube runner on that machine, so you can be pretty sure that the build will not encounter problems.

Using custom demands will make your life easier, because you are explicitly declaring what each agent can do.

Now you can customize the build to launch the above two command-line scripts to do the analysis, as you did manually before. You can do similar steps if you are using XAML Build: just add a script that launches the begin analysis before the build and the end analysis after the tests have run.

But if you are using build vNext, you will be happy to know that SonarQube runner tasks are already present in VSO/TFS vNext.

Figure 4: Configure SonarQube analysis in your build.

Only the begin analysis task needs configuration, and you only need to specify the same information you saved in the SonarQube.Analysis.xml file. Since I'm using the build where I've configured Semantic Versioning with GitVersion, I also have a build variable called AssemblyVersion that is automatically set by GitVersion, and I can reference it as $(AssemblyVersion) to pass the version to SonarQube.

I can now schedule a build and verify the output. First of all, the output of the Begin Analysis task should show that it connects correctly to the server and downloads the profile.

Figure 5: Output of the Start task for SonarQube Analysis

The output of the end step should contain a much longer log, because this is when the real analysis is run on your code.

Figure 6: Analysis took 45 seconds to complete

It is important that the end analysis task is the last one, because the Sonar analyzer can pick up code coverage results from your unit tests, a metric that is controversial but gives you a nice idea of the amount of unit testing the project contains.

Figure 7: Code coverage result is correctly saved in SonarQube

Thanks to automatic versioning, you also have a better timeline of the status of your project.

Figure 8: Versioning correctly stored inside SonarQube

The entire setup should not take you more than 30 minutes.

Gian Maria.

Installing SonarQube on Windows and SQL Server

Installing SonarQube on a Windows machine with SQL Server Express as back end is quite simple, but here is some information you should know to avoid common problems with the database layer (or at least to avoid the problems I had :) )

Setting up SonarQube on Windows is easy, but sometimes you can encounter problems getting it to talk to the SQL Server database.

First of all, I avoid using integrated authentication in SQL Server, because I find it easier to set up everything with a standard SQL user; after all, my SQL Express instance is used only by Sonar. So I create a user called sonar with a password, and remove the check for password expiration.
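If you prefer a script to the Management Studio UI, a sketch of this step could be the following, assuming the SQLPS module that provides Invoke-Sqlcmd is available (the instance name and password are placeholders):

# Create the sonar login with password expiration disabled
Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Query @"
CREATE LOGIN sonar WITH PASSWORD = N'xxxxxxxxxx', CHECK_EXPIRATION = OFF;
"@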

Figure 1: Avoid password expiration

Then you should open SQL Server Configuration Manager, where you must enable the TCP/IP protocol.

Figure 2: Enable TCP/IP protocol in Sql Server Configuration Manager

I found that the Java drivers are a little bit grumpy about connecting to my database, so I decided to explicitly disable dynamic ports and specify port 1433 directly. Just double-click the TCP/IP protocol shown in Figure 2 to open the properties for the TCP/IP protocol.

Figure 3: Disable dynamic ports and specify 1433 as TCP Port

Now create a new database called Sonar, and set the user sonar as owner of the database. Pay specific attention to the casing of the database name: I chose Sonar with a capital S.

Figure 4: Create a new database called Sonar with sonar user as owner

Now be sure to select the correct collation: remember that you should use a collation that is Case Sensitive and Accent Sensitive, like SQL_Latin1_General_CP1_CS_AS.

Figure 5: Specify the right Collation for the database. It should be CS and AS
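These two steps (database creation with the right collation, plus ownership) can also be scripted; again, a sketch via Invoke-Sqlcmd with the names described above:

# Create the Sonar database with a CS/AS collation and make sonar its owner
Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Query @"
CREATE DATABASE Sonar COLLATE SQL_Latin1_General_CP1_CS_AS;
ALTER AUTHORIZATION ON DATABASE::Sonar TO sonar;
"@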

Now, just to be sure that everything is ok, try to connect from Management Studio using port 1433 and the user sonar. To specify the port, use a comma between the server name and the port (e.g. localhost,1433).

Figure 6: Try to connect to the server with user sonar and port 1433

Verify that you can see the Sonar database. If you are able to connect and see the Sonar db, everything is ready. Remember to download the JDBC driver for MS SQL at this address; once downloaded, right-click the zip file and unblock it in the Properties section. Then unzip the content and copy the file jtds-1.3.1.jar into the extensions\jdbc-driver\mssql subfolder of your Sonar installation (the mssql folder usually does not exist and you must create it manually).
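A quick sketch of the copy step (the Sonar installation and download paths are my assumption):

$sonarHome = "C:\SonarQube"   # adjust to your installation folder
New-Item -ItemType Directory -Path "$sonarHome\extensions\jdbc-driver\mssql" -Force
Copy-Item "C:\Downloads\jtds-1.3.1-dist\jtds-1.3.1.jar" "$sonarHome\extensions\jdbc-driver\mssql"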

Now you should edit the conf/sonar.properties file and add the connection string to the database.

sonar.jdbc.username=sonar
sonar.jdbc.password=xxxxxxxxxx
sonar.jdbc.url=jdbc:jtds:sqlserver://localhost/Sonar;SelectMethod=Cursor;instance=sqlexpress

Pay specific attention to the database name in the connection string, because it is CASE SENSITIVE: as you can see, I specified sqlserver://localhost/Sonar with a capital S, exactly matching the database name. Since you are using an Accent Sensitive and Case Sensitive collation, it is super important that the casing of the database name matches the one used in the connection string. If you use the wrong casing, Sonar will not start and you will find this error in the sonar log.

The error "The database name component of the object qualifier must be the name of the current database" happens if you use the wrong casing for the database name in the connection string.

You should now be able to start Sonar without any problems.

Gian Maria

Fix of ChangeConnectionString resource in DSC Script to deploy Web Site

In the second part of this series I received a really good comment from Rob Cannon, who warned me about an error in my ChangeConnectionString resource. In that article I told you that it is ok for the Test part to always return False, so the Set script always runs, because it is idempotent. This is true if you are using the Push model, but if you are using the Pull model, DSC is applied every 30 minutes and the web.config is rewritten each time, so your application pool is restarted every 30 minutes. This is not a good situation, so I decided to change the script, fixing the Test part.

    Script ChangeConnectionString 
    {
        SetScript =
        {    
            $path = "C:\inetpub\dev\tailspintoys\Web.Config"
            $xml = Get-Content $path 

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $node.Attributes["connectionString"].Value = "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            $xml.Save($path)
        }
        TestScript = 
        {
            $path = "C:\inetpub\dev\tailspintoys\Web.Config"
            $xml = Get-Content $path 

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $cn = $node.Attributes["connectionString"].Value
            $stateMatched = $cn -eq "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            return $stateMatched
        }
        GetScript = 
        {
            return @{
                GetScript = $GetScript
                SetScript = $SetScript
                TestScript = $TestScript
                Result = $false
            }
        } 
    }

The Test part is really simple: it loads the xml file, verifies whether the connection string has the correct value, and returns true if the state matched, false otherwise. Running this new version of the script still always runs the Set part of ChangeConnectionString, exactly as before; nothing changed. At first I suspected a bug in the Test part, but after a moment I realized that the File resource actually overwrites the web.config with the original one whenever the script runs, because the file was changed. This is how DSC is supposed to work: the File resource forces a destination directory to be equal to a source directory.

This confirms that the technique of copying a base web.config with a File resource and changing it with a Script resource is suitable only for test servers, and only if you use Push configuration. To use Pull configuration, the right web.config should be uploaded to the original location, so you do not need to change it after it is copied by the File resource.

If you are interested in a quick fix, the solution could be using two distinct File resources: the first one copies all needed files from the original location to a temp directory, then ChangeConnectionString operates on the web.config present in this temp directory, and finally another File resource copies the files from the temp directory to the real IIS directory.

    File TailspinSourceFilesShareToLocal
    {
        Ensure = "Present"  # You can also set Ensure to "Absent"
        Type = "Directory"  # Default is "File"
        Recurse = $true
        SourcePath = $AllNodes.SourceDir + "_PublishedWebsites\Tailspin.Web" # This is a path that has web files
        DestinationPath = "C:\temp\dev\tailspintoys" # The path where we want to ensure the web files are present
    }

    
    # now change the web.config connection string
    Script ChangeConnectionString 
    {
        SetScript =
        {    
            $path = "C:\temp\dev\tailspintoys\Web.Config"
            $xml = Get-Content $path 

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $node.Attributes["connectionString"].Value = "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            $xml.Save($path)
        }
        TestScript = 
        {
            $path = "C:\temp\dev\tailspintoys\Web.Config"
            $xml = Get-Content $path 

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $cn = $node.Attributes["connectionString"].Value
            $stateMatched = $cn -eq "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            return $stateMatched
        }
        GetScript = 
        {
            return @{
                GetScript = $GetScript
                SetScript = $SetScript
                TestScript = $TestScript
                Result = $false
            }
        } 
    }
    
    
    File TailspinSourceFilesLocalToInetpub
    {
        Ensure = "Present"  # You can also set Ensure to "Absent"
        Type = "Directory"  # Default is "File"
        Recurse = $true
        SourcePath = "C:\temp\dev\tailspintoys" # This is a path that has web files
        DestinationPath = "C:\inetpub\dev\tailspintoys" # The path where we want to ensure the web files are present
    }

Now the ChangeConnectionString resource always runs, as we saw before, because each time the File resource runs it overwrites all the files with the content of the original files. Changing this web.config at each run is not a problem, because it lives in a temporary directory, so no worker process recycle happens. The final File resource now works correctly and copies the files only if they were modified. This is what happens during the first run.

Figure 1: During the first run all three resources were run: the first one copies files from the share to the local temp, the second one changes the web.config located in the temp folder, and finally the third one copies all files from the temp folder to the folder monitored by IIS.

If you run the configuration again without changing anything on the target node, you get this result.

Figure 2: During the second run, the first two resources run, but the third one, which actually copies files to the folder where the site resides, was skipped, avoiding a recycle of the worker process.

The important aspect of the previous picture is the third arrow, which highlights how the Set part of the resource that copies files from the temp directory to the local folder IIS points to is skipped, so no worker process recycle happens. Thanks to this simple change, the script can now be used even in a Pull process without too many changes.

Gian Maria.

Deploying Web Site With PowerShell DSC part 3

In this last part of the series I'll explain how to deploy the output of database projects to the local database of the node machine. It was the most difficult part, due to some errors present in the xDatabase resource. I actually have a couple of database projects in my solution: the first one defines the structure of the database needed by my application, while the second one references the first and installs only some test data with a post-deploy script. You can read about this technique in my previous post Manage Test Data in Visual Studio Database Project. Sadly enough, the xDatabase resource of DSC is still very rough and grumpy.

I’ve found two distinct problems:

The first one is that DatabaseName is used as the key property of the resource, which means it is not possible to run two different DacPacs against the same database because of a duplicate key violation. This would usually be a non-problem, because I could deploy only the project with test data: since it references the dacpac with the real structure of the site, both should deploy correctly. Unfortunately this does not happen, because you need to add some additional parameters to the deploy method, and the xDatabase resource still does not support the DacDeployOptions class. The fix was trivial: I changed the resource to use the name of the DacPac file as the key, and everything just works.

The second problem is more critical and derives from the usage of the DacServices.Register method inside the script. After the first successful deploy, all subsequent ones gave me errors. If you get errors during Start-DscConfiguration, the output of the cmdlet, even in verbose mode, does not give you the details of the real error that happened on the target node where the configuration ran. Usually what you get is a message telling: These errors are logged to the ETW channel called Microsoft-Windows-DSC/Operational. Refer to this channel for more details.

So it is time to have a look at the Event Viewer of the node where the failure occurred. Errors are located in Application And Service Logs / Microsoft / Windows / Desired State Configuration. Here is how I found the real error that xDatabase raised on the target node.

Figure 1: Errors in event viewer of Target Node.
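As an alternative to browsing the Event Viewer UI, you can query the same channel from PowerShell; a minimal sketch:

# Read the most recent errors from the DSC operational channel
Get-WinEvent -LogName "Microsoft-Windows-DSC/Operational" -MaxEvents 50 |
    Where-Object { $_.LevelDisplayName -eq "Error" } |
    Format-List TimeCreated, Message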

The error happens during the update: DacServices.Deploy fails to update the database because it was registered as a Data-tier Application, and the Deploy command does not update its registration accordingly. This problem was easy to solve, because I only needed to specify RegisterDataTierApplication in a DacDeployOptions object. I've added this fix to the original xDatabase resource, and I've also added more logging, so when DSC runs you can verify what the DacServices class is really doing.
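In essence, the fix boils down to passing a DacDeployOptions with RegisterDataTierApplication enabled to the Deploy call. This is a simplified sketch of the idea, not the exact code of the resource (variable names are illustrative):

# Microsoft.SqlServer.Dac.dll is already loaded by the resource
$options = New-Object Microsoft.SqlServer.Dac.DacDeployOptions
$options.RegisterDataTierApplication = $true
# $true means: upgrade the existing database instead of failing
$dacServices.Deploy($dacPackage, $databaseName, $true, $options)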

If you like, I've posted my fix at this address: http://1drv.ms/1osn09U, but remember that my fixes are not thoroughly tested and are not official Microsoft corrections in any way, so feel free to use them at your own risk. Clearly all these errors will be fixed when the final version of xDatabase is released (remember that these resources are pre-release, which is why they are prefixed with an x).

Now that the xDatabase resource works well, I can define a couple of resources to deploy my two dacpacs to the target database.

xDatabase DeployDac 
{ 
    Ensure = "Present" 
    SqlServer = "." 
    SqlServerVersion = "2012" 
    DatabaseName = "TailspinToys" 
    Credentials = (New-Object System.Management.Automation.PSCredential("sa", (ConvertTo-SecureString "xxxxx" -AsPlainText -Force)))
    DacPacPath =  $AllNodes.SourceDir + "Tailspin.Schema.DacPac" 
    DacPacApplicationName = "Tailspin"
} 
    

xDatabase DeployDacTestData
{ 
    Ensure = "Present" 
    SqlServer = "." 
    SqlServerVersion = "2012" 
    DatabaseName = "TailspinToys" 
    Credentials = (New-Object System.Management.Automation.PSCredential("sa", (ConvertTo-SecureString "xxxxx" -AsPlainText -Force)))
    DacPacPath =  $AllNodes.SourceDir + "Tailspin.SchemaAndTestData.DacPac" 
    DacPacApplicationName = "Tailspin"
} 

Shame on me, I'm using an explicit user name and password again in DSC scripts, but if I omit Credentials to use integrated security, the xDatabase script fails with a NullReferenceException. Since this is a test server, I accept using a clear-text password until the xDatabase resource is fixed to support integrated authentication.

Here is the link to the full DSC script: http://1drv.ms/1osoIYZ. Have fun with DSC.

Gian Maria.

How to Deploy Web Site with PowerShell DSC

I do not want to create yet another tutorial on DSC, so I suggest reading some introductory articles like Introducing PowerShell Desired State Configuration before reading this one. Since I'm pretty new to PowerShell and I'm starting to experiment with DSC, I decided to create a script that deploys my favorite test application (TailspinToys :) ) to a single Windows 2012 R2 server using only DSC. This post aims to share my thoughts on the subject.

I was able to complete the script, even if I encountered some difficulties, and I managed to automate almost everything except the installation of SQL Server 2012 (I'm working on it). The goal is being able to deploy an ASP.NET 4.5 application that uses a SQL Server database to a freshly installed Windows Server, using only DSC goodness.

First of all, I warn you that most of the resources I needed to deploy my site are not available in the basic distribution of PowerShell and should be downloaded from Microsoft. There is a single page on MSDN where you can download the entire DSC Resource Kit in one package.

After you download the resource kit, you should be aware of a couple of important points. The first is that these resources are not production ready: they are all experimental, which is why their names start with an x. So do not expect any official support program; if you have problems you should ask people in the forums, where you will find solutions. The other aspect is that if you, like me, appreciate the push model, you need to install all of these modules on all target servers. This violates, in a certain way, my requirement of being able to install on a clean server, because the server is not really "clean" if it needs DSC resources deployed on it. This problem will be mitigated by WMF 5.0, which introduces PowerShellGet to automatically discover, install and update PowerShell modules, so in the long run it is a non-issue.
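As an example, with WMF 5.0 and PowerShellGet, installing a resource module on a node becomes a one-liner (module name used purely as an illustration):

Install-Module -Name xWebAdministration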

Once everything is in place, I started creating the script. The first part is the standard one you can find in every PowerShell DSC article, plus some import instructions to load all the DSC resources I want to use in the package.

Configuration TailspinToys
{
   
  Import-DscResource -Module xWebAdministration
  Import-DscResource -Module xNetworking
  Import-DscResource -Module xSqlPs
  Import-DscResource -Module xDatabase
  #http://www.vexasoft.com/blogs/powershell/9561687-powershell-4-desired-state-configuration-enforce-ntfs-permissions
  Import-DscResource -Module NTFSPermission

  Node $AllNodes.NodeName 
  { 
    

    #Install the IIS Role 
    WindowsFeature IIS 
    { 
      Ensure = "Present" 
      Name = "Web-Server" 
    } 

    # Required for SQL Server 
    WindowsFeature installdotNet35 
    {             
        Ensure = "Present" 
        Name = "Net-Framework-Core" 
        Source = "\\neuromancer\Share\Sources_sxs\?Win2012R2" 
    } 

    #Install ASP.NET 4.5 
    WindowsFeature ASP 
    { 
      Ensure = "Present" 
      Name = "Web-Asp-Net45" 
    } 

At the beginning of the script, the Import-DscResource instructions allow me to import the various resources I've installed. The NTFSPermission resource is taken from an article on the VexaSoft site; many thanks to the author for this module. That article is really useful because it shows how easy it is to create a DSC resource when nothing pre-made exists for your purpose.

I use a Configuration block, and the special variable $AllNodes contains the name of the single server I want to use for the installation. The above part of the script takes care of all the prerequisites of my TailspinToys application. I'm installing .NET 3.5 because it is needed by the SQL Server installation, but sadly I was not able to make xSqlServerInstall work to automatically install SQL Server (it asks me to reboot, and even after rebooting the DSC script stops running). I've decided to install SQL Server manually and wait for a better, more stable version of xSqlServerInstall. Then I request IIS and ASP.NET 4.5.

Running the above script with the right configuration data produces a mof file that can be used to actually configure the target. Here is the configuration I'm using.

$ConfigurationData = @{
    AllNodes = @(
        @{
            NodeName="WebTest2"
            SourceDir = "\\neuromancer\Drops\TailspinToys_CD_WebTest1\TailspinToys_CD_WebTest1_20140213.1\"
            PSDscAllowPlainTextPassword=$true
            RebootNodeIfNeeded = $true
         }
   )
}

I need the name of the server and the source directory where the distribution of my web site is stored. In this example I'm using a standard drop folder of a TFS Build, so my binaries are indexed with my symbol server. The creation of the mof file is simply triggered by calling the newly defined TailspinToys function, passing the configuration above.

TailspinToys -ConfigurationData $ConfigurationData 

Now I have a mof file that contains everything needed for the deploy, and I can push the configuration to the desired nodes with:

Start-DscConfiguration -Path .\TailspinToys -Wait -Verbose

This starts the configuration, connects to all the nodes (in this example the single machine WebTest2) and "makes it so", moving the state of the nodes to the desired state. The cool part of DSC is that you specify the state you desire on the target nodes without caring about how that state will be achieved; that is done by the various resources. Another interesting aspect is that if a resource is already in the desired state, Start-DscConfiguration does nothing. The first time you run the above script it takes a little time, because it installs IIS, but if IIS is already installed on the target node, nothing happens.
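If you just want to check whether a node already matches the desired state, without applying anything, you can use Test-DscConfiguration:

# Returns True if the node is already in the desired state, False otherwise
Test-DscConfiguration -CimSession WebTest2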

With a few lines of PowerShell I was able to install IIS, ASP.NET 4.5 and .NET 3.5 on my machines.

In the next article I'll show how to deploy the website bits.

Gian Maria.