Create a Release with PowerShell, Zipped Artifacts and Chocolatey

In a previous post I described how to create a simple PowerShell script that is capable of installing software starting from a zipped file that contains the “release” of the software (produced by a build) and some installation parameters. Once you have this scenario up and running, releasing your software automatically is quite simple.

Once you have automated the installation with a PowerShell script plus an archive file containing everything needed to install the software, you are only a step away from Continuous Deployment.

A possible release vector is Chocolatey, a package manager for Windows based on NuGet. I’m not a Chocolatey expert, but at the most basic level, to create a Chocolatey package you only need a bunch of files and a PowerShell script that installs the software.

The most basic form of a Chocolatey package is composed of files that contain the artifacts to be installed and a PowerShell script that performs the installation.

The very first step is creating a .nuspec file that defines what will be contained in the Chocolatey package. Here is a possible content of the release.nuspec file.

<package>
  <metadata>
  …
  </metadata>

   <files>
        <file src="Tools\Setup\chocolateyInstall.ps1" target="Tools" />
        <file src="Tools\Setup\ConfigurationManagerSetup.ps1" target="Tools" />
        <file src="Tools\Setup\JarvisUtils.psm1" target="Tools" />
        <file src="release\Jarvis.ConfigurationService.Host.zip" target="Artifacts" />
  </files>

</package>

The really interesting part of this file is the <files> section, where I simply list all the files I want to be included in the package. ConfigurationManagerSetup.ps1 and JarvisUtils.psm1 are the original installation scripts I wrote, and Jarvis.ConfigurationService.Host.zip is the name of the artifact generated by the PackRelease.ps1 script.

To support Chocolatey you only need to create another script, called chocolateyInstall.ps1, that Chocolatey will call to install the software. This script is really simple: it parses the Chocolatey input parameters and invokes the original installation script to install the software.

Write-Output "Installing Jarvis.ConfigurationManager service. Script running from $PSScriptRoot"

$packageParameters = $env:chocolateyPackageParameters
Write-Output "Passed packageParameters: $packageParameters"

$arguments = @{}

# Script used to parse parameters is taken from https://github.com/chocolatey/chocolatey/wiki/How-To-Parse-PackageParameters-Argument

# Now we can use the $env:chocolateyPackageParameters inside the Chocolatey package
$packageParameters = $env:chocolateyPackageParameters

# Default the values
$installationRoot = "c:\jarvis\setup\ConfigurationManager\ConfigurationManagerHost"

# Now parse the packageParameters using good old regular expression
if ($packageParameters) {
    $match_pattern = "\/(?([a-zA-Z]+)):(?([`"'])?([a-zA-Z0-9- _\\:\.]+)([`"'])?)|\/(?([a-zA-Z]+))"
    $option_name = 'option'
    $value_name = 'value'

    Write-Output "Parameters found, parsing with regex";
    if ($packageParameters -match $match_pattern )
    {
       $results = $packageParameters | Select-String $match_pattern -AllMatches
       $results.matches | % {
            
            $arguments.Add(
                $_.Groups[$option_name].Value.Trim(),
                $_.Groups[$value_name].Value.Trim())
       }
    }
    else
    {
        Throw "Package Parameters were found but were invalid (REGEX Failure)"
    }

    if ($arguments.ContainsKey("installationRoot")) {

        $installationRoot = $arguments["installationRoot"]
        Write-Output "installationRoot Argument Found: $installationRoot"
    }
} else 
{
    Write-Output "No Package Parameters Passed in"
}

Write-Output "Installing ConfigurationManager in folder $installationRoot"

$artifactFile = "$PSScriptRoot\..\Artifacts\Jarvis.ConfigurationService.Host.zip"

if(!(Test-Path -Path $artifactFile ))
{
     Throw "Unable to find package file $artifactFile"
}

Write-Output "Installing from artifacts: $artifactFile"

if(!(Test-Path -Path "$PSScriptRoot\ConfigurationManagerSetup.ps1" ))
{
     Throw "Unable to find package file $PSScriptRoot\ConfigurationManagerSetup.ps1"
}


if(-not(Get-Module -name jarvisUtils)) 
{
    Import-Module -Name "$PSScriptRoot\jarvisUtils"
}

&amp; $PSScriptRoot\ConfigurationManagerSetup.ps1 -deployFileName $artifactFile -installationRoot $installationRoot

This script does almost nothing: it delegates everything to the ConfigurationManagerSetup.ps1 file. Once the .nuspec file and this script are written, you can use nuget.exe to manually generate a package and verify that it installs correctly on a target system.
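A quick local test might look like this (the version number and paths are placeholders, not values from the real project):

# Generate the package from the nuspec
nuget.exe pack release.nuspec -Version 1.0.0
# Install from the local folder that contains the generated .nupkg
choco install Jarvis.ConfigurationService.Host -source "$pwd" -force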

When everything is ok and you are able to create and install a Chocolatey package locally, you can schedule Chocolatey package creation with a TFS Build. The advantage is automatic publishing and SemVer version management done with GitVersion.exe.
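As a sketch of what the build automates: GitVersion.exe emits its computed versions as JSON, so the same SemVer value could also feed a manual pack step (the NuGetVersion property name is an assumption based on GitVersion’s standard output):

# GitVersion.exe prints its computed versions as JSON on stdout
$versionInfo = GitVersion.exe | Out-String | ConvertFrom-Json
nuget.exe pack release.nuspec -Version $versionInfo.NuGetVersion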


Figure 1: Task to generate Chocolatey Package.

To generate the Chocolatey package you can use the standard NuGet Packager task, because Chocolatey uses the very same technology as NuGet. As you can see from Figure 1, the task is really simple and needs very few parameters to work. If you want more detail, I’ve blogged in the past about using this task to publish NuGet packages.

Publish a Nuget Package to NuGet/MyGet with VSO Build

This technique is perfectly valid for VSTS or TFS 2015 on-premises. Once the build is finished and the package is published, you should check on NuGet or MyGet that the package is ok.


Figure 2: Package listing on MyGet, where I can verify all published versions.

Now you can install it with this command:

choco install Jarvis.ConfigurationService.Host
    -source 'https://www.myget.org/F/jarvis/api/v2'
    -packageParameters "/installationRoot:C:\Program files\Proximo\ConfigurationManager"
    -force

All the parameters needed by the installation script should be specified with the -packageParameters command line option. It is the duty of chocolateyInstall.ps1 to parse this string and pass all the parameters to the original installation script.

If you want to install a prerelease version (one with a suffix) you need to specify the exact version and use the -pre parameter.
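For example, installing a specific prerelease version might look like this (the version number is a placeholder):

choco install Jarvis.ConfigurationService.Host
    -source 'https://www.myget.org/F/jarvis/api/v2'
    -version 1.2.0-beta0001
    -pre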

A working example of the scripts and the nuspec file can be found in the ConfigurationManager project.

Gian Maria.

Using PowerShell scripts to deploy your software

I often use PowerShell scripts to package a “release” of a software during a build, because it gives me a lot of flexibility.

Different approaches for publishing Artifacts in build vNext

The advantage of using PowerShell is complete control over what will be included in the “release” package. This allows you to manipulate configuration files, remove unnecessary files, copy files from somewhere else in the repository, and so on.

The aim of using PowerShell in a build is creating a single archive that contains everything needed to deploy a new release.

This is the first paradigm: all the artifacts needed to install the software should be included in one or more zip archives. Once you have this, the only step that separates you from Continuous Deployment is creating another script that is capable of using that archive to install the software on the current system. This is the typical minimum set of parameters of such a script.

param(
    [string] $deployFileName,
    [string] $installationRoot
)

This script accepts the name of the package file and the path where the software should be installed. More complex programs accept other configuration parameters, but this is the simplest situation for a simple piece of software that runs as a Windows service and needs no configuration. The script does some preparation and then starts the real installation phase.

$service = Get-Service -Name "Jarvis - Configuration Service" -ErrorAction SilentlyContinue 
if ($service -ne $null) 
{
    Stop-Service "Jarvis - Configuration Service"
    $service.WaitForStatus("Stopped")
}

Write-Output 'Deleting actual directory'

if (Test-Path $installationRoot) 
{
    Remove-Item $installationRoot -Recurse -Force
}

Write-Output "Unzipping setup file"
Expand-WithFramework -zipFile $deployFileName -destinationFolder $installationRoot

if ($service -eq $null) 
{
    Write-Output "Starting the service in $finalInstallDir\Jarvis.ConfigurationService.Host.exe"

    &amp; "$installationRoot\Jarvis.ConfigurationService.Host.exe" install
} 

The first step is obtaining a reference to a Windows service called “Jarvis - Configuration Service”; if the service is present, the script stops it and waits for it to be really stopped. Once the service is stopped, the script deletes the current installation directory and then extracts all the files contained in the zipped archive to the same directory. If the service was not present (it is the very first installation), it invokes the executable with the install option (we are using TopShelf).

The goal of the script is being able to work for a first-time installation as well as for subsequent installations.

A couple of aspects are interesting in this approach. First, the software does not keep any specific configuration in the installation directory: when it is time to update, the script deletes everything and then copies the new version to the same path. Second, the script is made to work whether the service was already installed or this is the very first installation.

Since this simple software actually uses some configuration in the app.config file, it is the duty of the script to reconfigure the application after the deploy.

$configFileName = $installationRoot + "\Jarvis.ConfigurationService.Host.exe.config"
$xml = [xml](Get-Content $configFileName)
 
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='uri']/@value" -value "http://localhost:55555"
Edit-XmlNodes $xml -xpath "/configuration/appSettings/add[@key='baseConfigDirectory']/@value" -value "..\ConfigurationStore"

$xml.save($configFileName)

This snippet of code uses a helper function to change the configuration file with XPath. There is another assumption here: no one should change the configuration file manually, because it will be overwritten on the next installation. All the settings contained in the configuration file should be passed to the installation script.
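A minimal sketch of what such an Edit-XmlNodes helper might look like, assuming it takes the XML document, an XPath expression and the new value (the real helper in the project may differ):

function Edit-XmlNodes {
    param (
        [xml] $doc,
        [string] $xpath,
        [string] $value
    )

    # Update every node matched by the XPath expression
    $nodes = $doc.SelectNodes($xpath)
    foreach ($node in $nodes) {
        if ($node -is [System.Xml.XmlAttribute]) {
            # The expression selected an attribute (e.g. .../@value)
            $node.Value = $value
        }
        else {
            $node.InnerText = $value
        }
    }
}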

The installation script should accept any setting that needs to be stored in the application configuration file. This is needed to allow for a full-overwrite approach, where the script deletes all previous files and overwrites them with the new version.

In this example we change the port the service is listening on and the directory where the service will store configuration files (it is a configuration file manager). The configuration store is set to ..\ConfigurationStore, a folder outside the installation directory. This preserves the content of that folder on the next setup.

To simplify updates, you should ensure that it is safe to delete the old installation folder and overwrite it with a new one during an upgrade. No configuration or manual modification of files that are part of the installation must be necessary.

The script uses hardcoded values: port 55555 and the ..\ConfigurationStore folder; if you prefer, you can pass these values as parameters of the installation script. The key aspect here is: every configuration file that needs to be manipulated and parametrized after installation should be placed in another directory. We always ensure that the installation folder can be deleted and recreated by the script.

This assumption is strong, but it avoids complicating the installation script, which would otherwise need to merge the default settings of the new version of the software with the old settings of the previous installation. For this reason, the Configuration Service uses a folder outside the standard installation to store configuration.

All the scripts can be found in the Jarvis Configuration Service project.

Gian Maria.

Avoid using Shell commands in PowerShell scripts

I have setup scripts that are used to install software; they are simply based on this paradigm:

The build produces a zip file that contains everything needed to install the software; then we have a script that accepts the zip file as a parameter, as well as some other parameters, and installs the software on the local machine.

This simple paradigm is perfect, because we can install the software manually by launching PowerShell, or we can create a Chocolatey package to automate the installation. Clearly you can use the very same script to release the software inside TFS Release Management.
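A manual run, for example, might look like this (paths are placeholders):

.\ConfigurationManagerSetup.ps1 `
    -deployFileName "C:\drops\Jarvis.ConfigurationService.Host.zip" `
    -installationRoot "C:\jarvis\setup\ConfigurationManager\ConfigurationManagerHost"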

When a PowerShell script runs in the RM agent (or in a build agent) it has no access to the Shell. This is usually the default, because agents do not run interactively, and in Release Management PowerShell scripts are usually executed remotely. This implies that you should not use anything related to the Shell in your scripts. Unfortunately, if you look on the internet for code to unzip a zip file with PowerShell, you can find code that uses the shell.application object:

function Expand-WithShell(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true,
    [bool] $quietMode = $false) 
{
    $shell = new-object -com shell.application

    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
          Remove-Item $destinationFolder -Recurse -Force
    }

    New-Item $destinationFolder -ItemType directory

    $zip = $shell.NameSpace($zipFile)
    foreach($item in $zip.items())
    {
        if (!$quietMode) { Write-Host "unzipping $($item.Name)" }
        $shell.Namespace($destinationFolder).copyhere($item)
    }
}

This is absolutely a problem if you run a script that uses this function in a build or RM agent. A far better alternative is a function that uses classes from the .NET Framework:

function Expand-WithFramework(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true
)
{
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
          Remove-Item $destinationFolder -Recurse -Force
    }
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipFile, $destinationFolder)
}

With this function you are not using anything related to the Shell, and it can be run without problems even during a build or a release. As you can see, the function is simple and is probably a better solution even for interactive runs of the script.
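For instance, an interactive run might call it like this (paths are placeholders):

Expand-WithFramework -zipFile "C:\drops\Jarvis.ConfigurationService.Host.zip" -destinationFolder "C:\jarvis\host"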

Another solution is using 7-Zip from the command line: it gives you better compression and it is free, but you need to install it on every server where you want to run the script. This implies that plain zip files are probably still the simplest way to package your build artifacts for deployment.
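If you choose 7-Zip, a sketch of the extraction call might be the following; the 7z.exe path is an assumption and depends on where 7-Zip is installed on the server:

# x = extract with full paths; -o sets the output folder; -y answers yes to all prompts
& "C:\Program Files\7-Zip\7z.exe" x $zipFile "-o$destinationFolder" -y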

Gian Maria.

Fix of ChangeConnectionString resource in DSC Script to deploy Web Site

In the second part of this series I received a really good comment from Rob Cannon, who warned me about an error in my ChangeConnectionString resource. In that article I told you that it is ok for the Test part to always return False, so that the Set script is always run, because it is idempotent. This is true if you are using the Push model, but if you are using the Pull model instead, the DSC configuration is applied every 30 minutes and the web.config is changed every time, so your application pool is restarted. This is not a good situation, so I decided to change the script, fixing the Test part.

    Script ChangeConnectionString 
    {
        SetScript =
        {    
            $path = "C:\inetpub\dev\tailspintoys\Web.Config"
            $xml = [xml](Get-Content $path)

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $node.Attributes["connectionString"].Value = "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            $xml.Save($path)
        }
        TestScript = 
        {
            $path = "C:\inetpub\dev\tailspintoys\Web.Config"
            $xml = [xml](Get-Content $path)

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $cn = $node.Attributes["connectionString"].Value
            $stateMatched = $cn -eq "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            return $stateMatched
        }
        GetScript = 
        {
            return @{
                GetScript = $GetScript
                SetScript = $SetScript
                TestScript = $TestScript
                Result = $false
            }
        } 
    }

The Test part is really simple: it loads the XML file, verifies whether the connection string has the correct value, and returns true if the state matched, false otherwise. Running this new version of the script still ran the Set part of ChangeConnectionString every time, exactly as before: nothing changed. At first I suspected a bug in the Test part, but after a moment I realized that the File resource actually overwrites the web.config with the original one whenever the script runs, because it was changed. This is how DSC is supposed to work: the File resource forces the destination directory to be equal to the source directory.

This confirms that the technique of downloading a base web.config with the File resource and changing it with a Script resource is suitable only for test servers and only if you use Push configuration. To use Pull configuration, the right web.config should be uploaded to the original location, so you do not need to change it after it is copied by the File resource.

If you are interested in a quick fix, the solution could be using two distinct File resources: the first one copies all the needed files from the original location to a temp directory, then ChangeConnectionString operates on the web.config present in that temp directory, and finally another File resource copies the files from the temp directory to the real IIS directory.

 File TailspinSourceFilesShareToLocal
    {
        Ensure = "Present"  # You can also set Ensure to "Absent"
        Type = "Directory“ # Default is “File”
        Recurse = $true
        SourcePath = $AllNodes.SourceDir + "_PublishedWebsites\Tailspin.Web" # This is a path that has web files
        DestinationPath = "C:\temp\dev\tailspintoys" # The path where we want to ensure the web files are present
    }

    
    #now change web config connection string
    Script ChangeConnectionString 
    {
        SetScript =
        {    
            $path = "C:\temp\dev\tailspintoys\Web.Config"
            $xml = [xml](Get-Content $path)

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $node.Attributes["connectionString"].Value = "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            $xml.Save($path)
        }
        TestScript = 
        {
            $path = "C:\temp\dev\tailspintoys\Web.Config"
            $xml = [xml](Get-Content $path)

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $cn = $node.Attributes["connectionString"].Value
            $stateMatched = $cn -eq "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=xxx;Max Pool Size=1000"
            return $stateMatched
        }
        GetScript = 
        {
            return @{
                GetScript = $GetScript
                SetScript = $SetScript
                TestScript = $TestScript
                Result = $false
            }
        } 
    }
    
    
    File TailspinSourceFilesLocalToInetpub
    {
        Ensure = "Present"  # You can also set Ensure to "Absent"
        Type = "Directory“ # Default is “File”
        Recurse = $true
        SourcePath = "C:\temp\dev\tailspintoys" # This is a path that has web files
        DestinationPath = "C:\inetpub\dev\tailspintoys" # The path where we want to ensure the web files are present
    }

Now the ChangeConnectionString resource runs every time, as we saw before, because each time the first File resource runs it updates all the files with the content of the original files. Changing this web.config at each run is not a problem, because it is in a temporary directory, so no Worker Process recycle happens. The final File resource now works correctly and copies the files only if they are modified. This is what happens during the first run.


Figure 1: During the first run all three resources were run: the first one copies files from the share to the local temp folder, the second one changes the web.config located in the temp folder, and finally the third one copies all files from the temp folder to the folder monitored by IIS.

If you run the configuration again without changing anything on the target node, you get this result.


Figure 2: During the second run, the first two resources are run, but the third one, which actually copies files to the folder where the site resides, was skipped, avoiding recycling the worker process.

The important aspect in the previous picture is the third arrow, which highlights how the Set part of the resource that copies files from the temp directory to the local folder IIS points to is skipped, so no worker process recycle happens. Thanks to this simple change, the script can now be used even in a Pull process without too many changes.

Gian Maria.

Deploying Web Site With PowerShell DSC part 3

In this last part of the series I’ll explain how to deploy the output of database projects to the local database of the node machine. It was the most difficult part, due to some errors present in the xDatabase resource. I actually have a couple of database projects in my solution: the first one defines the structure of the database needed by my application, while the second one references the first and installs only some test data with a post-deploy script. You can read about this technique in my previous post Manage Test Data in Visual Studio Database Project. Sadly enough, the xDatabase resource of DSC is still very rough and grumpy.

I’ve found two distinct problems:

The first one is that DatabaseName is used as the key property of the resource; this means that it is not possible to run two different DacPacs on the same database because of a duplicate key violation. This is usually a non-problem, because I could have deployed only the project with the test data: since it references the dacpac with the real structure of the site, both of them should deploy correctly. Unfortunately this does not happen, because you need to pass some additional parameters to the deploy method, and the xDatabase resource still does not support the DacDeployOptions class. The fix was trivial: I changed the resource to use the name of the DacPac file as the key, and everything just works.

The second problem is more critical and derives from the usage of the DacService.Register method inside the script. After the first successful deploy, all the subsequent ones gave me errors. If you get errors during Start-DscConfiguration, the output of the cmdlet, even in verbose mode, does not give you the details of the real error that happened on the target node where the configuration was run. Usually what you get is a message telling you: “These errors are logged to the ETW channel called Microsoft-Windows-DSC/Operational. Refer to this channel for more details.”

It is time to have a look at the Event Viewer of the node where the failure occurred. Errors are located in Application And Service Logs / Microsoft / Windows / Desired State Configuration. This is how I found the real error that xDatabase was raising on the target node.


Figure 1: Errors in event viewer of Target Node.
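You can also read the same channel directly from PowerShell; a quick sketch (the log name is the one reported by the DSC error message):

# Dump the latest DSC operational events on the target node
Get-WinEvent -LogName "Microsoft-Windows-DSC/Operational" -MaxEvents 20 |
    Format-List TimeCreated, LevelDisplayName, Message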

The error happens during the update: DacServices.Deploy failed to update the database because the database was registered as a Data-tier Application, and the Deploy command does not update its registration accordingly. This problem was easy to solve, because I only needed to specify RegisterDataTierApplication in a DacDeployOptions instance. I added this fix to the original xDatabase resource too, together with more logging, so you are able to verify, when DSC runs, what the DacServices class is really doing.
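Outside of DSC, a minimal sketch of the fixed call looks like this; the DacFx assembly path and the dacpac path are assumptions and depend on what is installed on the node:

# Load DacFx (the path is an assumption; adjust it to the installed version)
Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

$dacService = New-Object Microsoft.SqlServer.Dac.DacServices("Data Source=.;Integrated Security=True")
$dacPackage = [Microsoft.SqlServer.Dac.DacPackage]::Load("C:\drops\Tailspin.Schema.DacPac")

$options = New-Object Microsoft.SqlServer.Dac.DacDeployOptions
# Keep the Data-tier Application registration in sync during the upgrade
$options.RegisterDataTierApplication = $true

# Deploy with upgradeExisting = $true and the custom options
$dacService.Deploy($dacPackage, "TailspinToys", $true, $options)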

If you like, I’ve posted my fix at this address: http://1drv.ms/1osn09U, but remember that my fixes are not thoroughly tested and are not official Microsoft corrections in any way, so feel free to use them at your own risk. Clearly all these errors will be fixed when the final version of xDatabase is released (remember that these resources are pre-release, and this is the reason why they are prefixed with an x).

Now that the xDatabase resource works well, I can define a couple of resources to deploy my two DacPacs to the target database.

xDatabase DeployDac 
{ 
    Ensure = "Present" 
    SqlServer = "." 
    SqlServerVersion = "2012" 
    DatabaseName = "TailspinToys" 
    Credentials = (New-Object System.Management.Automation.PSCredential("sa", (ConvertTo-SecureString "xxxxx" -AsPlainText -Force)))
    DacPacPath =  $AllNodes.SourceDir + "Tailspin.Schema.DacPac" 
    DacPacApplicationName = "Tailspin"
} 
    

xDatabase DeployDacTestData
{ 
    Ensure = "Present" 
    SqlServer = "." 
    SqlServerVersion = "2012" 
    DatabaseName = "TailspinToys" 
    Credentials = (New-Object System.Management.Automation.PSCredential("sa", (ConvertTo-SecureString "xxxxx" -AsPlainText -Force)))
    DacPacPath =  $AllNodes.SourceDir + "Tailspin.SchemaAndTestData.DacPac" 
    DacPacApplicationName = "Tailspin"
} 

Shame on me, I’m using an explicit user name and password again in DSC scripts, but if I omit Credentials to use integrated security, the xDatabase script fails with a NullReferenceException. Since this is a test server, I accept using a clear-text password until the xDatabase resource is fixed to support integrated authentication.

Here is the link to the full DSC script: http://1drv.ms/1osoIYZ. Have fun with DSC.

Gian Maria.