Build and Deploy Asp.Net App with Azure DevOps

I’ve blogged in the past about deploying ASP.NET applications, but lots of new features have landed in Azure DevOps and it is time to refresh some basic concepts. Web.config transforms in particular always cause lots of confusion and, even if I’m an advocate of removing every configuration from files and source, the topic is worth examining.

The best approach for configuration is removing it from source control, using configuration services, etc., and moving away from web.config.

But since most people still use web.config, let’s start with a standard ASP.NET application with a web.config and a couple of application settings that should be changed during deploy.

Figure 1: Simple configuration file with two settings
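As a concrete sketch, a minimal web.config with two settings of this kind could look like the following (Key1/Key2 and their values are illustrative, not taken from a real project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Hypothetical settings whose values differ per environment -->
    <add key="Key1" value="dev value one" />
    <add key="Key2" value="dev value two" />
  </appSettings>
</configuration>
```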

When it is time to configure your release pipeline, you MUST adhere to the mantra: build once, deploy many. This means that you should have one build that prepares the binaries to be installed, and the very same binaries will be deployed to several environments.

Since each environment will have a different value for the app settings stored in web.config, I’ll start by creating a web.config transform for the Release configuration (the one that will be released), changing each setting to a specific token.

Figure 2: Transformation file that tokenizes the settings

In Figure 2 I show how I change the value of the Key1 setting to __Key1__ and Key2 to __Key2__. This is necessary because I’ll replace these tokens with the real values during release.

The basic trick is changing configuration values in files during the build, setting some tokenized value that will be replaced during release. Using double underscore as prefix and suffix is enough for most situations.
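A Web.Release.config transform that performs this tokenization could be sketched like this (standard XDT syntax; setting names follow the example above):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Replace the real dev values with tokens for the release pipeline -->
    <add key="Key1" value="__Key1__"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
    <add key="Key2" value="__Key2__"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```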

Now it is time to create a build that generates the package to install. The pipeline is really simple: the solution is built with MSBuild with the standard configuration for publishing a web site. I’ve used the MSBuild task and not the Visual Studio task because I do not want to require Visual Studio on my build agent; MSBuild is enough.

Figure 3: Build and publish the web site with a standard MSBuild task.
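In YAML form, an MSBuild step of this kind could be sketched as follows (the OutDir argument is an assumption of mine, chosen to match the _PublishedWebsites output mentioned later):

```yaml
- task: MSBuild@1
  displayName: 'Build and publish web site'
  inputs:
    solution: '**/*.sln'
    configuration: '$(BuildConfiguration)'   # Release, so Web.Release.config applies
    # Setting OutDir makes the web project publish under _PublishedWebsites
    msbuildArguments: '/p:OutDir=$(Build.BinariesDirectory)\'
```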

If you run the build you will be disappointed, because the resulting web.config is not transformed: it keeps the very content of the one in source control. This happens because the transformation is not done during standard web site publishing, but by Visual Studio when you use the publish wizard. Luckily there is a task in preview that performs web.config transformation; you can simply place it before the MSBuild task and the game is done.

Figure 4: File transform task is in preview but it does its work perfectly

As you can see in Figure 4, you simply specify the directory of the application, then choose XML transformation and finally the option to use the web.$(BuildConfiguration).config transformation file to transform web.config.
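As a sketch, the equivalent step in a YAML pipeline could look like this (task name and inputs reflect the File Transform task as I know it; the folder path is a placeholder):

```yaml
- task: FileTransform@1
  displayName: 'Transform web.config with web.$(BuildConfiguration).config'
  inputs:
    folderPath: '$(Build.SourcesDirectory)/src/MyWebApp'   # hypothetical app folder
    enableXmlTransform: true
    # Apply web.<configuration>.config on top of web.config
    xmlTransformationRules: '-transform **\web.$(BuildConfiguration).config -xml **\web.config'
```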

Now you only need to copy the result of the publish into the artifact staging directory, then upload it with the standard publish artifact task.

Figure 5: Copy the result of the publish task into the staging directory and finally publish the artifact.
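These two steps could be sketched in YAML as follows (source folder and artifact name are assumptions):

```yaml
- task: CopyFiles@2
  displayName: 'Copy published web site to staging directory'
  inputs:
    SourceFolder: '$(Build.BinariesDirectory)/_PublishedWebsites'  # assumes OutDir used by MSBuild
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish artifact'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'WebSite'   # hypothetical artifact name
```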

If you read other posts of my blog you know that I usually add a PowerShell script that reorganizes files, compresses them, etc., but for this simple application it is perfectly fine to copy the _PublishedWebsites/ directory as build artifact.

Figure 6: Published artifacts after the build completes.

Take time to verify that the output of the build (artifacts) is exactly what you expect before moving on to configure the release.

Before building the release phase, please download the web.config file and verify that the substitutions were performed and web.config contains what you expect.

Figure 7: Both of my settings were substituted correctly.

Now it is time to create the release, but first of all I suggest you install the Replace Tokens extension, which contains a nice task to perform substitutions during a release in an easy and intuitive way.

One of the great strengths of Azure DevOps is extensibility: there are tons of custom tasks for lots of different jobs, so take time to look in the Marketplace if you cannot find the task you need among the basic ones.

Let’s start by creating a simple release that uses the previous build as artifact and contains two simple stages, dev and production.

Figure 8: Simple release with two stages to deploy the web application.

Each of the two stages has a simple two-task job to deploy the application, based on the assumption that each environment was already configured (IIS installed, site configured, etc.), so, to deploy our ASP.NET app, we can simply overwrite the old installation folder with the new binaries.

The Replace Tokens task comes in handy in this situation: simply add it as the first task of the job (before the task that copies files into the IIS directory), then configure prefix and suffix with the two underscores to match the criteria used to tokenize the configuration in web.config.

Figure 9: Configure replace token suffix and prefix to perform substitution.

In this example only web.config should be changed, but the task can perform substitution on multiple files.

Figure 10: Substitution configuration points to the web.config file.
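A sketch of the Replace Tokens step, matching the prefix/suffix convention used in the build (the task version and the root directory are assumptions of mine):

```yaml
- task: replacetokens@3
  displayName: 'Replace tokens in web.config'
  inputs:
    rootDirectory: '$(System.DefaultWorkingDirectory)/WebSite'  # hypothetical artifact folder
    targetFiles: '**/web.config'
    tokenPrefix: '__'    # matches the __Key1__ / __Key2__ tokens
    tokenSuffix: '__'
```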

The beautiful aspect of the Replace Tokens task is that it uses all the variables of the release to perform the substitution. For each variable it replaces the corresponding token, built from prefix and suffix; this is why I tokenized the configuration in the build. My web.config contains the __Key1__ and __Key2__ tokens, so I can simply configure those two variables differently for the two environments and my release is finished.

If you use the Grid visualization it is immediately clear how each stage is configured.

Figure 11: Configure variables for each stage; the Replace Tokens task will do the rest.

Everything is done: just trigger a release and verify that the web.config of the two stages is changed accordingly.

Figure 12: Sites deployed in two stages with different settings, everything worked as expected.

Everything worked well: I was able to build once with web.config tokenization, then release the same artifacts to different stages, with different configurations managed by the release definition.

Happy AzDo

YAML Build in Azure DevOps

I’ve blogged in the past about YAML builds in Azure DevOps, but in those early days that kind of build was a little bit rough and many people still preferred the old builds based on visual editing in the browser. One of the main complaints was that the build was not easy to edit and there were some glitches, especially when it was time to access external services.

Months after the first version, the experience is really improved and I strongly suggest you start migrating existing builds to this new system, to take advantage of having the build definition directly in the code, a practice that is more DevOps oriented and that allows you to have different build tasks for different branches.

YAML builds are now first-class citizens in Azure DevOps and it is time to plan switching old builds to the new engine.

You can simply start from an existing build: just edit it, select one of the phases (or the entire process), then press the View YAML button to grab the generated YAML definition.

Figure 1: Generate YAML definition from an existing build created with the standard editor

Now you can simply create a YAML file in any branch of your repository, paste the content into the file, commit it to the branch and create a new build based on that file. I can clearly select not only Azure DevOps repositories, but also GitHub and GitHub Enterprise repositories.

Figure 2: I can choose GitHub as source repository, not only Azure Repos.

Then I can choose the project, searching in all the projects I have access to with the access token used to connect GitHub.

Figure 3: Accessing GitHub repositories is simple; once you have connected the account with an access token, Azure DevOps can search in repositories.

Just select a repository and choose the Existing Azure Pipelines option; if you are starting from scratch you can create a starter pipeline.

Figure 4: Choose the option to use an existing pipeline.

You are ready to go: just choose the branch and the YAML file and the game is done.

Figure 5: You can directly specify the build file in the pipeline creation wizard.

Converting an existing build pipeline to YAML is a matter of no more than 10 minutes of your time, including connecting the Azure DevOps account to GitHub. Now you can simply run and edit your build directly from the browser.

Figure 6: Your build is ready to run inside Azure DevOps.

Your build is now ready to run without any problem. If you specified triggers as in Figure 6, a push to the repository automatically kicks off the build. You can also directly edit the build in the browser, and pressing the Run button (Figure 6) triggers a run of the build without the need to push anything.
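A minimal azure-pipelines.yml with triggers of this kind could be sketched as follows (pool image and placeholder step are assumptions):

```yaml
trigger:
- master            # a push to this branch kicks off the build automatically

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo "building..."
  displayName: 'Placeholder build step'
```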

But the actual YAML build editor really starts to shine when you edit your build in the web editor, because you have IntelliSense, as you can see in Figure 7.

Figure 7: YAML build web editor has now intellisense.

As you can see, the YAML build editor allows you to edit with full IntelliSense support. If you want to add a task, you can simply start writing task followed by a colon and the editor will suggest all the available tasks. When it is time to edit properties, you have IntelliSense and help for each task parameter, as well as help for the entire task. This is really useful because it immediately spots deprecated tasks (Figure 9).

Figure 8: All parameters can be edited with full IntelliSense and help support.

Figure 9: Help warns you about deprecated tasks that should no longer be used.

With the new web editor with IntelliSense, maintaining a YAML build is easy, and no more difficult than using the standard graphical editor.

Outdated tasks are underlined in green, so you can immediately spot where the build definition is not optimal. As an example, if a task has a new version, the old version is underlined in green and IntelliSense suggests that the value is no longer supported. This area still needs some more love, but it works quite well.

There are no more excuses to keep using the old web-editor-based build definitions: go and start converting everything to YAML definitions, your life as a build maintainer will be better :)

Gian Maria.

Sonar Analysis of Python with Azure DevOps pipeline

Once you have tests and code coverage for your build of Python code, the last step for a good build is adding support for code analysis with Sonar/SonarCloud. SonarCloud is the best option if your code is open source, because it is free and you do not need to install anything except the free addin from the Azure DevOps Marketplace.

Starting from the original build you only need to add two steps: Prepare Analysis on SonarCloud and Run SonarCloud Analysis, in the same way you do analysis for a .NET project.

Figure 1: Python build in Azure DevOps

You do not need to configure anything for a standard analysis with default options; just follow the configuration in Figure 2:

Figure 2: Configuration of Sonar Cloud analysis

The only trick I had to use is deleting the /htmlcov folder created by pytest for code coverage results. Once the coverage result is uploaded to the Azure DevOps server I do not need it anymore, and I want to exclude it from the Sonar analysis. Remember that if you do not configure anything special, SonarCloud will analyze everything in the code folder, so you will end up with errors like these:

Figure 3: Failed Sonar Cloud analysis caused by output of code coverage.

You could clearly do a better job by configuring the SonarCloud analysis to skip those folders, but in this situation a simple Delete Files task does the job.

To avoid cluttering SonarCloud analysis with unneeded files, you need to delete any files that were generated in the directory and that you do not want to analyze, like code coverage reports.

Another important setting is in the Advanced section, because you should specify the file containing the code coverage result as an extended Sonar property.

Figure 4: Extra property to specify location of coverage file in the build.

Now you can run the build and verify that the analysis was indeed sent to SonarCloud.

Figure 5: After the build I can analyze code smells directly in SonarCloud.

If, like me, you prefer YAML builds, here is the complete YAML build definition that you can adapt to your repository.

queue:
  name: Hosted Ubuntu 1604

trigger:
- master
- develop
- features/*
- hotfix/*
- release/*

steps:

- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'

- bash: |
   pip install pytest 
   pip install pytest-cov 
   pip install pytest-xdist 
   pip install pytest-bdd 
  displayName: 'Install a bunch of pip packages.'

- task: SonarSource.sonarcloud.14d9cde6-c1da-4d55-aa01-2965cd301255.SonarCloudPrepare@1
  displayName: 'Prepare analysis on SonarCloud'
  inputs:
    SonarCloud: SonarCloud
    organization: 'alkampfergit-github'
    scannerMode: CLI
    configMode: manual
    cliProjectKey: Pytest
    cliProjectName: Pytest
    extraProperties: |
     # Additional properties that will be passed to the scanner, 
     # Put one key=value per line, example:
     # sonar.exclusions=**/*.bin
     sonar.python.coverage.reportPath=$(System.DefaultWorkingDirectory)/coverage.xml

- bash: 'pytest --junitxml=$(Build.StagingDirectory)/test.xml --cov --cov-report=xml --cov-report=html' 
  workingDirectory: '.'
  displayName: 'Run tests with code coverage'
  continueOnError: true

- task: PublishTestResults@2
  displayName: 'Publish test result /test.xml'
  inputs:
    testResultsFiles: '$(Build.StagingDirectory)/test.xml'
    testRunTitle: 010

- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/htmlcov'
    additionalCodeCoverageFiles: '$(System.DefaultWorkingDirectory)/**'

- task: DeleteFiles@1
  displayName: 'Delete files from $(System.DefaultWorkingDirectory)/htmlcov'
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)/htmlcov'
    Contents: '**'

- task: SonarSource.sonarcloud.ce096e50-6155-4de8-8800-4221aaeed4a1.SonarCloudAnalyze@1
  displayName: 'Run Sonarcloud Analysis'

The only setting you need to adapt is the name of the SonarCloud connection (in this example it is called SonarCloud), which you can add/change in Project Settings > Service Connections.

Figure 6: Service connection settings where you can add/change the connection with SonarCloud servers.

A possible final step is adding the Build Breaker extension to your account, which allows you to make your build fail whenever the SonarCloud Quality Gate fails.

Thanks to the Azure DevOps build system, creating a build that runs tests and analyzes your Python code is extremely simple.

Happy Azure DevOps.

Gian Maria

Deploy click-once application on Azure Blob with Azure DevOps

A long time ago I blogged about publishing a click-once application from a VSTS build to Azure Blob storage; much time has passed and lots of stuff has changed. The whole process is now simpler, thanks to many dedicated tasks that avoid any manual work.

My new build always starts with a GitVersion custom task, which populates some environment variables with version numbers generated by GitVersion; this allows me to simply add an MSBuild task to the build to publish click-once using automatic GitVersion versioning.

Figure 1: MSBuild task to publish the click-once application

You need to build the csproj that contains the project, using $(BuildConfiguration) to build in the right configuration and adding custom arguments (3) as in the following list:

/target:publish 
/p:ApplicationVersion=$(AssemblyVersion)  
/p:PublishURL=https://aaaaaaa.blob.core.windows.net/atest/logviewer/  
/p:UpdateEnabled=true  
/p:UpdateMode=Foreground 
/p:ProductName=Jarvis.LogViewer 

ApplicationVersion is set to the variable $(AssemblyVersion) that was populated by the GitVersion task; the publish URL is simply the address of an Azure blob storage account, which contains a blob container called atest, with a folder named logviewer that has public read access.
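In a YAML pipeline the same invocation could be sketched like this (the project path is a placeholder; the MSBuild arguments are those from the list above):

```yaml
- task: MSBuild@1
  displayName: 'Publish click-once package'
  inputs:
    solution: 'src/LogViewer/LogViewer.csproj'   # hypothetical project path
    configuration: '$(BuildConfiguration)'
    msbuildArguments: >-
      /target:publish
      /p:ApplicationVersion=$(AssemblyVersion)
      /p:PublishURL=https://aaaaaaa.blob.core.windows.net/atest/logviewer/
      /p:UpdateEnabled=true
      /p:UpdateMode=Foreground
      /p:ProductName=Jarvis.LogViewer
```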

Figure 2: Public read access for a simple blob in my azure subscription

This allows me to have a simple public blob, with a public address that everyone can read, so everyone can download my application.

Azure blob can be given a public read access to everyone, making your click-once application available to everyone.

The MSBuild task simply creates an app.publish subfolder that contains all the files that need to be copied into Azure blob storage; the first step is a Copy Files task to copy them into the artifacts directory, as in the following picture.

Figure 3: All files of the click-once publishing were copied into the artifact staging directory.

Now everything is ready: just add a final task of type Azure File Copy and configure it to copy everything from the artifacts directory to the right subfolder of the Azure blob.

Figure 4: Configuration of the Azure File Copy task to deploy click-once generated files.

Configuration is really simple: thanks to a direct connection to an Azure subscription (3) we can simply select the storage account (4), the blob container name (5) and the subfolder of the blob (6).

One of the advantages of the dedicated Azure File Copy task is that it can simply use one of the Azure accounts linked to the Azure DevOps account, making it super simple to choose the right blob to upload the click-once package to.
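The Azure File Copy step could be sketched in YAML like this (the service connection name is a placeholder; storage, container and folder names follow the example above):

```yaml
- task: AzureFileCopy@2
  displayName: 'Copy click-once files to Azure blob'
  inputs:
    SourcePath: '$(Build.ArtifactStagingDirectory)'
    azureSubscription: 'MyAzureServiceConnection'   # hypothetical connection name
    Destination: AzureBlob
    storage: 'aaaaaaa'        # storage account from the publish URL
    ContainerName: 'atest'
    BlobPrefix: 'logviewer'   # subfolder inside the container
```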

Once you’ve finished the configuration, you can simply run a build and your application will be automatically published.

Figure 5: Summary of the build with click-once publishing.

Now you can simply point to the setup.exe file at the address of the blob and the game is done.

Gian Maria.

Run code coverage for Python project with Azure DevOps

Creating a simple build that runs Python tests written with the PyTest framework is really simple, but the next step is having code coverage. Even if I’m pretty new to Python, getting code coverage in a build is really simple, thanks to a specific task that comes out of the box with Azure DevOps: Publish Code Coverage Results.

In Azure DevOps you can create a build with the web editor or with a simple YAML file. I prefer YAML, but since I already demonstrated a YAML build for Python in an old post, this time I’m going to demonstrate a classic build: here is the core part of the build.

Figure 1: Core build to run tests and have code coverage uploaded to Azure DevOps

As you can see, I decided to run tests with a Bash script running on Linux; here is the task configuration, where I’ve added the pytest options to collect code coverage during the test run.

Figure 2: Configuration of the Bash script to run pytest

The task is configured to run an inline script (1); the command line (2) contains --cov options to specify the Python modules I want to monitor for code coverage, then a couple of --cov-report options to have output in XML and HTML format. Finally I’ve specified the subfolder that contains the module I want to test (3) and configured the task with Continue on Error (4), so if some tests fail the build will be marked as partially succeeded.

Thanks to pytest, running code coverage is just a matter of adding some options to the command line.
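As a YAML fragment, a Bash step of this kind could be sketched as follows (the module name and working directory are placeholders):

```yaml
- bash: |
    pytest --junitxml=$(Build.StagingDirectory)/test.xml \
      --cov=mymodule \
      --cov-report=xml --cov-report=html
  workingDirectory: 'src'       # hypothetical subfolder containing the module
  displayName: 'Run tests with code coverage'
  continueOnError: true         # failed tests mark the build partially succeeded
```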

After the build finishes you can see in the output how pytest generates the code coverage report: it creates a file called coverage.xml and an entire directory called htmlcov that contains an HTML report of the coverage.

Figure 3: Result of running tests with code coverage.

If you look at Figure 1 you can see that the build’s final task is a Publish Code Coverage Results task, whose duty is to grab the output of the pytest run and upload it to the server. Configuration is really simple: you need to choose Cobertura as the code coverage tool (the format used by pytest) and point it at the output of the test run. Looking at the output in Figure 3 you can double check that the summary file is called coverage.xml and the whole report directory is the htmlcov subdirectory.

Figure 4: Configuration for Publish Code Coverage task.
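The same configuration expressed as a YAML fragment (paths match the pytest output described above):

```yaml
- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: Cobertura   # the XML format emitted by pytest-cov
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/htmlcov'
```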

Once you run the build, you can find the code coverage result on the summary page, as well as the code coverage report published as a build artifact; the whole configuration will take you no more than 10 minutes.

Figure 5: Artifacts containing code coverage reports as well as code coverage percentage are accessible from Build Summary page.

Finally, you also have a dedicated tab for code coverage, showing the HTML summary of the report.

Figure 6: Code coverage HTML report uploaded in a dedicated Code Coverage tab in build result

Even if the code coverage output is not perfectly formatted, you can immediately verify the percentage of code coverage of your tests.

Gian Maria.