Avoid using Shell commands in PowerShell scripts

I have setup scripts that are used to install software, and they are based on a simple paradigm.

The build produces a zip file that contains everything needed to install the software; then we have a script that accepts the zip file, together with some other parameters, and installs the software on a local machine.

This simple paradigm works really well, because we can install the software manually by launching PowerShell, or we can create a Chocolatey package to automate the installation. Clearly you can use the very same script to release the software with TFS Release Management.

When a PowerShell script runs in an RM agent (or in a build agent) it has no access to the Shell. This is the default, because agents do not run interactively, and in Release Management PowerShell scripts are usually executed remotely. This implies that you should not use anything related to the Shell in your scripts. Unfortunately, if you look on the internet for code to unzip a zip file with PowerShell, you often find code that uses the shell.application COM object:

function Expand-WithShell(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true,
    [bool] $quietMode = $false) 
{
    # Relies on the Windows Explorer shell COM object, which is not available to a build or RM agent.
    $shell = new-object -com shell.application

    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
        Remove-Item $destinationFolder -Recurse -Force
    }

    New-Item $destinationFolder -ItemType directory

    $zip = $shell.NameSpace($zipFile)
    foreach($item in $zip.items())
    {
        if (!$quietMode) { Write-Host "unzipping $($item.Name)" }
        $shell.Namespace($destinationFolder).copyhere($item)
    }
}

This is absolutely a problem if you run a script that uses this function in a build or RM agent. A much better alternative is a function that uses classes from the .NET Framework:

function Expand-WithFramework(
    [string] $zipFile,
    [string] $destinationFolder,
    [bool] $deleteOld = $true
)
{
    # Load the ZipFile class, which is not referenced by default in PowerShell.
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    if ((Test-Path $destinationFolder) -and $deleteOld)
    {
        Remove-Item $destinationFolder -Recurse -Force
    }
    # ExtractToDirectory creates the destination folder and unzips everything into it.
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipFile, $destinationFolder)
}

With this function you are not using anything related to the Shell, and it runs without problems during a build or a release. As you can see, the function is simple and is probably a better solution even for interactive runs of the script.
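
A typical call from a setup script looks like the following; the paths are purely illustrative. On PowerShell 5 or later the built-in Expand-Archive cmdlet is another Shell-free option.

# Hypothetical paths: the zip produced by the build and the install location.
Expand-WithFramework -zipFile "C:\Drops\MyProduct.zip" -destinationFolder "C:\Apps\MyProduct" -deleteOld $true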

Another solution is using 7-Zip from the command line: it gives you better compression and it is free, but you need to install it on every server where you want to run the script. This implies that plain zip files are probably still the simplest way to package your build artifacts for deployment.

Gian Maria.

Deploying a ClickOnce application with build vNext to Azure Blob Storage

Thanks to the new build system in TFS / VSTS, publishing an application with Click-once during a build is really simple.

Versioning the click-once app

The project uses Git and GitFlow, so it comes natural to use GitVersion (as described in a previous post) to automatically apply Semantic Versioning. In a previous post I demonstrated how to use this technique to publish NuGet packages, and nothing changes for ClickOnce applications.

Thanks to GitVersion and GitFlow I do not need to care about version management of my project.

The only difference is how the version number is built for ClickOnce applications. ClickOnce does not support suffixes like -unstable or -beta, which NuGet supports for pre-release packages. To get a valid Major.Minor.Patch.Suffix number, the solution is splitting the pre-release tag (e.g. -unstable.3) and grabbing only the trailing number. The result is that instead of version 1.2.5-unstable.7 (perfectly valid for NuGet but not for ClickOnce), the build will use version 1.2.5.7, perfectly valid for ClickOnce.

The whole operation is managed by a simple PowerShell script.

# Take the trailing part of the pre-release tag, which is the pre-release number (e.g. unstable.3 -> 3).
$splitted = $preReleaseTag.Split('.')
$preReleaseNum = $splitted[$splitted.Length - 1]
if ([string]::IsNullOrEmpty($preReleaseNum))
{
    # Stable builds have no pre-release tag, so fall back to zero.
    $preReleaseNum = "0"
}
# Compose the four-part ClickOnce version number, e.g. 1.2.5.3.
$clickOnceVersion = $version.MajorMinorPatch + "." + $preReleaseNum
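
The snippet assumes that $preReleaseTag and $version were populated earlier from the GitVersion output; here is a self-contained illustration with hypothetical values:

# Hypothetical values, standing in for the GitVersion output used by the build.
$preReleaseTag = "unstable.3"
$version = @{ MajorMinorPatch = "1.2.5" }

$splitted = $preReleaseTag.Split('.')
$preReleaseNum = $splitted[$splitted.Length - 1]
if ([string]::IsNullOrEmpty($preReleaseNum)) { $preReleaseNum = "0" }
$clickOnceVersion = $version.MajorMinorPatch + "." + $preReleaseNum

Write-Host $clickOnceVersion   # prints 1.2.5.3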

Building click-once package

The official MSDN documentation has a nice page that explains how to publish a ClickOnce application from the command line with MSBuild. This is an excellent starting point, because parameters can easily be passed to MSBuild during a vNext build, so no customization is needed to create the ClickOnce package.

Publishing with the MSBuild command line makes it easy to integrate into a vNext build.

The first step is declaring a bunch of build variables, to store all the values needed during the build.

Figure 1: Variables to add for vNext build

The build needs three variables: $(ClickOnceVersion) will contain the version generated with GitVersion, $(AzureBlobUrl) will contain the address of the Azure Blob storage where the application will be published, and $(AzureBlobPrefix) is the prefix (folder) used to categorize publishing; basically it is the subfolder of the Azure Blob container where the files will be copied.

The PowerShell script does some string manipulation to detect which branch is about to be published. Thanks to GitVersion, the $preReleaseTag variable tells the build the logical branch that is about to be compiled (e.g. unstable.1). If it is empty the build is building the stable branch; if it is not, a simple split on the dot character gives the logical branch: unstable, beta, etc. Using the GitVersion preReleaseTag is better than checking the branch name, because it better explains the role of the current branch in GitFlow.

Knowing the logical branch (stable, unstable, beta) allows the build to publish different versions of the product in different subdirectories of the Azure Blob container.

This technique allows people to choose the distribution they want, similar to what Chrome does.

# The first part of the pre-release tag is the logical branch (unstable, beta, ...).
$preReleaseString = $splitted[0]
if ([string]::IsNullOrEmpty($preReleaseString))
{
    # No pre-release tag: this is the stable branch.
    $preReleaseString = "stable"
    $ProductName = "MDbGui.Net"
}
else
{
    # Pre-release branches get a suffixed product name, e.g. MDbGui.Net-unstable.
    $ProductName = "MDbGui.Net-$preReleaseString"
}

Now that all variables are ready, it is time to specify the MSBuild arguments on the task that builds the solution; this is the only modification needed to trigger publishing of the ClickOnce package. These are the command line options needed to publish the ClickOnce package from MSBuild:

/target:publish 
/p:ApplicationVersion=$(ClickOnceVersion) 
/p:PublishURL=$(AzureBlobUrl)/$(AzureBlobPrefix)/ 
/p:UpdateEnabled=true 
/p:UpdateMode=Foreground 
/p:ProductName=$(ProductName) 
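
For reference, the same publish can be reproduced outside the build from a PowerShell prompt where msbuild.exe is on the path (backticks are PowerShell line continuations); the solution name, version, URL and product name below are placeholders:

msbuild MyClickOnceApp.sln /target:publish `
    /p:ApplicationVersion=1.2.5.7 `
    /p:PublishURL=https://mystorage.blob.core.windows.net/myapp/unstable/ `
    /p:UpdateEnabled=true `
    /p:UpdateMode=Foreground `
    /p:ProductName=MyApp-unstable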

Handling artifacts

The previous MSBuild command line produces all the files needed for ClickOnce publishing inside a subdirectory called app.publish inside the output directory. To have a clean build, it is better to copy all the relevant files into the staging directory.

This procedure was fully described in a previous post that deals with managing artifacts in a vNext build. I always prefer including a script that copies all the files that need to be published as build results into the staging directory, instead of relying on publishing every file inside the bin folders.
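
A minimal sketch of such a copy script, assuming the conventional app.publish output folder and the staging directory variable exposed by the vNext agent (the project path is hypothetical):

param(
    # Hypothetical project layout: adjust to where app.publish is produced.
    [string] $sourceFolder = "src\MyClickOnceApp\bin\Release\app.publish",
    # BUILD_STAGINGDIRECTORY is set by the vNext build agent.
    [string] $stagingFolder = $env:BUILD_STAGINGDIRECTORY
)

$destination = Join-Path $stagingFolder "ClickOnce"
if (-not (Test-Path $destination)) { New-Item $destination -ItemType Directory | Out-Null }

# Copy the whole ClickOnce publish output into a dedicated staging subfolder.
Copy-Item -Path (Join-Path $sourceFolder "*") -Destination $destination -Recurse -Force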

Figure 2: Call a PowerShell script to copy all relevant files to the staging directory

At the end of this step all the ClickOnce related files are placed inside a subfolder of the staging directory. The last step is moving those files to a web site or to Azure Blob storage. ClickOnce apps only need to be placed on an accessible site; an instance of IIS is perfectly valid, but the cost of Azure Blob storage is really low and there is already a nice build vNext task to copy files to it, so it would be a shame not to take advantage of it.

Moving everything to Azure Blob storage

The easiest solution to copy all the ClickOnce related files to a publicly accessible location is creating a public Azure Blob container, because build vNext already has a task to accomplish this.
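
Behind the scenes the task does little more than an authenticated blob upload. If you ever need to reproduce it manually, here is a minimal sketch with the classic Azure PowerShell module; the account name, key, container and paths are placeholders, and the module is assumed to be installed:

# Hypothetical storage account, key and container.
$context = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage key>"

$source = "C:\Staging\ClickOnce"
Get-ChildItem $source -Recurse -File | ForEach-Object {
    # Keep the path relative to the source folder, prefixed with the branch subfolder (e.g. unstable).
    $relative = $_.FullName.Substring($source.Length + 1).Replace('\', '/')
    Set-AzureStorageBlobContent -File $_.FullName -Container "mdbgui" -Blob "unstable/$relative" -Context $context -Force
}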

Figure 3: The Azure Blob File Copy task can copy files easily to the target Blob storage

To connect an Azure account with an Azure service endpoint, please consult my previous post on the subject, Use the right Azure Service Endpoint in build vNext, to avoid my mistake of trying to access a classic Blob storage account with Azure ARM.

As you can see from Figure 3, I specified the name of the storage account, the container name and a prefix that is used to publish different branches of the application.

At the end of the build you have your application published at a URL like this one: https://portalvhdsdlrhmlyhqzrk1.blob.core.windows.net/mdbgui/unstable/MDbGui.Net.application

Conclusion

All the scripts used to create this article are in the project https://github.com/stefanocastriotta/MDbGui.Net/tree/develop/Tools/BuildScripts; you can download them and use them in your project if you want. Thanks to the ability to change the build number, here is the result.

Figure 4: You can easily find the version number in the build number.

Gian Maria.

Build vNext, support for deploying bits to Windows machines

One of the most interesting trends of the DevOps movement is continuous deployment from build machines. Once you have your continuous build up and running, the next step is customizing the build to deploy to one or more test environments. If you do not need to deploy to production, there is no need for a controlled release pipeline (e.g. Release Management) and using a simple build is the most productive choice. In this scenario one of the biggest pains is moving bits from the build machine to the target machines. Once the build output is moved to a machine, installing the bits is usually only a matter of running some PowerShell script.

In a Build, Deploy, Test scenario, quite often copying the build output to the target machines is the most difficult part

Thanks to build vNext, solving this problem is super easy. If you go to your visualstudio.com account and choose the TEST hub, you can see a submenu called Machines.

Figure 1: Machines functionality in Visualstudio.com

This new menu is related to a new feature, still in preview, used to define groups of machines that can be used for deployment and testing workflows. In Figure 1 you can see a group called Cyberpunk1. Creating a group is super easy: you just need to give it a name, specify administrative credentials and list the machines that compose the group. You can also use different credentials for each machine, but using machines joined to an Active Directory domain is usually the simplest scenario.

Figure 2: Editing of machine groups

At the moment this feature does not support Azure Virtual Machines, but you can easily target machines in your on-premises infrastructure. You just need to be sure that:

  • All machines are reachable from the machine where the build agent is running
  • DNS name resolution works for the target machines
  • All target machines have PowerShell remoting enabled
  • All target machines have file sharing enabled
  • The required firewall ports are open (see the sketch after this list for a quick way to prepare a machine).
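
A quick way to prepare a target machine, assuming you can open an elevated PowerShell session on it (the firewall rule group name below is the English one; adjust it on localized systems):

# Run in an elevated PowerShell session on each target machine.
Enable-PSRemoting -Force

# Open the firewall for file and printer sharing, needed by the file copy task.
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes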

I tested with an environment where both machines run Windows Server 2012 R2, with the latest updates and file sharing enabled. Once you have defined a machine group, you can use it to automatically copy files from the build agent to all the machines with a simple build vNext task.

Figure 3: Windows Machine File Copy task

Thanks to this simple task you can copy files from the build machine to the destination machines without installing any agent or other component on them. All you need to do is choose the machine group, the source folder and the target folder.

If you get errors running the build, a nice new feature of build vNext is the ability to download the full log as a zip, where the logs are separated by task.

Figure 4: All build logs are separated for each step, to simplify troubleshooting

Opening the file 4_Copy … you can read the logs related to the copy build step and understand why the step is failing. Here is what I found in one of my builds that failed.

Failed to connect to the path \\vsotest.cyberpunk.local with the user administrator@cyberpunk.local for copying.System error 53 has occurred. 
 2015-06-10T08:35:02.5591803Z The network path was not found.

In this specific situation the RoboCopy tool is complaining that the network path was not found, because I forgot to enable file sharing on the target machine. Once I enabled file sharing and ran the build again, everything was green, and I verified that all files were correctly copied to the target machines.

As a general rule, whenever a build fails, download all the logs and inspect the specific log of the task that failed.

Figure 5: Sample application was correctly copied to target machines.

In my first sample I used the TailspinToys sample application. I configured MSBuild to use the staging directory as output folder with the OutDir parameter (/p:OutDir=$(build.stagingDirectory)), and thanks to the Windows Machine File Copy task all the build output is automatically copied to the target machines.

Once you have the build output copied to the target machines, you only need to create a script that installs the new bits, and maybe some integration tests to verify that the application is in a healthy state.
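
As an illustration, a minimal sketch of that last step using PowerShell remoting; the machine name, credentials and paths are placeholders, and Install.ps1 is a hypothetical script copied together with the build output:

# Hypothetical target machine and deploy folder.
$credential = Get-Credential "CYBERPUNK\administrator"
Invoke-Command -ComputerName "vsotest.cyberpunk.local" -Credential $credential -ScriptBlock {
    # Run the install script that was copied by the Windows Machine File Copy task.
    & "C:\Deploy\TailspinToys\Install.ps1"
}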

Gian Maria

Build vNext, distributing load to different agents

One of the major benefits of the new build infrastructure of TFS and Visual Studio Online is the easy deployment of build agents. The downside of this approach is that your infrastructure becomes full of agents, and you need some way to determine which agent(s) to use for a specific build. The problem is:

avoid running builds on machines that are “not appropriate” for that build.

Running on a specific agent

If you are customizing a build, or if you are interested in running the build on a specific agent on a specific machine (e.g. a local agent), the solution is super easy: just edit the build definition and, in the General tab, add a demand named Agent.Name with the name of the desired agent as its value.

Figure 1: Adding a demand for a specific agent

If the agent is not available, you are warned when you try to queue the build.

Figure 2: Warning on build queuing when there are agent problems

In this situation the system is warning me that there are agents compatible with the build, but they are offline. You can queue the build and it will be executed when the agent comes online again, or you can press cancel and investigate why the agent is not online.

If you want to run a build on a specific agent, just add an Agent.Name demand to the build.

Specifying demands

The previous example is interesting because it uses build demands against properties of the agent. If you navigate to the build agent admin page, for each agent you can see all the associated properties.

Figure 3: Agent capabilities

On top there are User Capabilities, which are editable and are used to assign custom capabilities to your agents. In the bottom part there are System Capabilities, automatically determined by the agent itself, which cannot be changed. If you examine these capabilities you can find interesting ones such as VisualStudio, which tells you whether the agent is installed on a machine where Visual Studio is installed; other capabilities report the exact version of Visual Studio.

This is really important, because if you examine the demands of a build you can verify that some of them are already placed in the build definition and cannot be removed.

Figure 4: Some of the demands are read-only; you can add your own demands

If you wonder why the build has some predefined, read-only demands, the answer is: they come from the tasks in the build definition.

Figure 5: Build definition

If you look at the build definition, it contains a Visual Studio Build task, and it is this task that automatically adds the visualstudio demand to the build. This explains why that demand cannot be removed: it is required by a build step. If you try to remove the Visual Studio Test task, you can verify that the vstest demand is gone as well.

Each build task automatically adds demands to make sure it runs on compatible agents.

In my example I have some agents deployed in my office; they are used to run tests on some machines, and I use them for build definitions composed only of build and test tasks, with no publishing of build artifacts. The reason is that I have really low upload bandwidth in my office, but I have fast machines with really fast SSDs and tons of RAM, so they are able to quickly build and test my projects.

Then I have some builds used to publish artifacts, and I want to be sure that those builds are not executed on agents in my office, or they would saturate my upload bandwidth. To avoid this problem I simply add an uploader demand to these builds, and manually add this capability to agents deployed on machines that have no problem with upload bandwidth, e.g. agents deployed on Azure Virtual Machines.

You can use custom Demands to be sure that a build runs on agents with specific capabilities.

Using Agent pools to separate build environments

The final option to subdivide work among your agents is using agent pools. A pool is similar to the old concept of a controller: each build is associated with a default pool, and it can be scheduled only on agents bound to that pool. Using different pools can be useful if you have really different build environments and you want a strong separation between them.

A possible example: if you have agents deployed on fast machines with fast SSDs or a RAM disk to speed up building and testing, you can create a dedicated pool with a name such as FastPool. Once the pool is created you can schedule high priority builds on that pool, to be sure that they are completed in the least amount of time. You can further subdivide agents in that pool using capabilities such as SSD, RAMDISK, etc.

You can also create a pool called “Priority” to execute high priority builds, making sure that some slow build does not slow down the high priority ones, and so on. If you have two offices located very far apart, you can have a different pool for each office, to be sure that builds are executed on the local network of that office, and so on.

With agent pools you can have a strong separation between your build environments.

To conclude this post: you should use agent pools if you want to achieve strong separation and you are creating distinct build environments that share some strong common characteristic (physical location, machine speed, priority, etc.). Inside a pool you can further subdivide work among agents using custom demands.

As a final note, as of today, with the latest update of VSO, the new build engine is no longer in preview: the tab has no asterisk anymore and the keyword PREVIEW is gone :), so build vNext in VSO has reached GA.


Gian Maria.

Fix for the ChangeConnectionString resource in the DSC script used to deploy a web site

In the second part of this series I received a really good comment from Rob Cannon, who warned me about an error in my ChangeConnectionString resource. In that article I told you that it is ok for the Test part to always return False, so that the Set script always runs, because it is idempotent. This is true if you are using the Push model, but if you are using the Pull model instead, the DSC configuration is applied every 30 minutes and the web.config is changed each time, so your application pool is restarted. This is not a good situation, so I decided to change the script and fix the Test part.

    Script ChangeConnectionString 
    {
        SetScript =
        {    
            $path = "C:\inetpub\dev\tailspintoys\Web.Config"
            # Load the file as an XML document, not as plain text.
            $xml = [xml](Get-Content $path)

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $node.Attributes["connectionString"].Value = "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            $xml.Save($path)
        }
        TestScript = 
        {
            $path = "C:\inetpub\dev\tailspintoys\Web.Config"
            $xml = [xml](Get-Content $path)

            # The resource is in the desired state only if the connection string already has the expected value.
            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $cn = $node.Attributes["connectionString"].Value
            $stateMatched = $cn -eq "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            return $stateMatched
        }
        GetScript = 
        {
            return @{
                GetScript = $GetScript
                SetScript = $SetScript
                TestScript = $TestScript
                Result = $false
            }
        } 
    }

The Test part is really simple: it loads the xml file, verifies whether the connection string has the correct value, and returns true if the state matches, false otherwise. However, running this new version of the script still runs the Set part of ChangeConnectionString every time, exactly as before: nothing changed. At first I thought there was a bug in the Test part, but after a moment I realized that the File resource actually overwrites the web.config with the original one on every run, because the file was changed. This is how DSC is supposed to work: the File resource forces the destination directory to be equal to the source directory.

This confirms that the technique of downloading a base web.config with a File resource and changing it with a Script resource is suitable only for test servers and only if you use Push configuration. To use Pull configuration properly, the right web.config should be available in the original location, so you do not need to change it after it is copied by the File resource.

If you are interested in a quick fix, the solution is to use two distinct File resources: the first one copies all needed files from the original location to a temp directory, then ChangeConnectionString operates on the web.config present in this temp directory, and finally another File resource copies the files from the temp directory to the real IIS directory.

    File TailspinSourceFilesShareToLocal
    {
        Ensure = "Present"  # You can also set Ensure to "Absent"
        Type = "Directory"  # Default is "File"
        Recurse = $true
        SourcePath = $AllNodes.SourceDir + "_PublishedWebsites\Tailspin.Web" # This is a path that has web files
        DestinationPath = "C:\temp\dev\tailspintoys" # The path where we want to ensure the web files are present
    }

    
    #now change web config connection string
    Script ChangeConnectionString 
    {
        SetScript =
        {    
            $path = "C:\temp\dev\tailspintoys\Web.Config"
            # Load the file as an XML document, not as plain text.
            $xml = [xml](Get-Content $path)

            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $node.Attributes["connectionString"].Value = "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            $xml.Save($path)
        }
        TestScript = 
        {
            $path = "C:\temp\dev\tailspintoys\Web.Config"
            $xml = [xml](Get-Content $path)

            # Compare against the same connection string written by the Set part.
            $node = $xml.SelectSingleNode("//connectionStrings/add[@name='TailspinConnectionString']")
            $cn = $node.Attributes["connectionString"].Value
            $stateMatched = $cn -eq "Data Source=localhost;Initial Catalog=TailspinToys;User=sa;pwd=123abcABC;Max Pool Size=1000"
            return $stateMatched
        }
        GetScript = 
        {
            return @{
                GetScript = $GetScript
                SetScript = $SetScript
                TestScript = $TestScript
                Result = $false
            }
        } 
    }
    
    
    File TailspinSourceFilesLocalToInetpub
    {
        Ensure = "Present"  # You can also set Ensure to "Absent"
        Type = "Directory"  # Default is "File"
        Recurse = $true
        SourcePath = "C:\temp\dev\tailspintoys" # This is a path that has web files
        DestinationPath = "C:\inetpub\dev\tailspintoys" # The path where we want to ensure the web files are present
    }

Now the ChangeConnectionString resource still runs every time, as we saw before, because each time the first File resource runs it refreshes the temp files with the content of the original files. Changing this web.config at each run is not a problem, because it lives in a temporary directory, so no worker process recycle happens. The final File resource now works correctly and copies the files only if they were modified. This is what happens during the first run.

Figure 1: During the first run all three resources run; the first one copies files from the share to the local temp folder, the second one changes the web.config located in the temp folder, and the third one copies all files from the temp folder to the folder monitored by IIS.

If you run the configuration again without changing anything on the target node, you get this result.

Figure 2: During the second run the first two resources run, but the third one, which actually copies files to the folder where the site resides, was skipped, avoiding a worker process recycle.

The important aspect of the previous picture is the third arrow, which highlights how the Set part of the resource that copies files from the temp directory to the local folder IIS points to is skipped, so no worker process recycle happens. Thanks to this simple change, the script can now be used in a Pull scenario without too many modifications.
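
For completeness, here is a minimal sketch of how a configuration like this is applied in Push mode; TailspinDeploy is a hypothetical configuration name containing the resources above, and the output path is a placeholder:

# Compile the configuration into MOF files, then push it to the target node.
TailspinDeploy -OutputPath C:\DscOutput
Start-DscConfiguration -Path C:\DscOutput -Wait -Verbose -Force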

Gian Maria.