Is Manual Release in Azure DevOps useful?

When people create a release in Azure DevOps, they primarily focus on how to make the release automatic, but to be 100% honest, automation is only one side of the release, and probably not the most useful one.

First of all, a release is about auditing and understanding which version of the software is released where and by whom. In this scenario the most important question is "how can I deploy my software to production?".

Figure 1: A simple release composed of two stages.

In Figure 1 I created a simple release with two stages; this clearly states that to go to production I need to deploy to a Test Machine stage first, then deploy to production. I do not want to give you a full tutorial, MSDN is full of nice examples, but if you look at the picture you can notice the small user icons before each stage, which allow you to specify who can approve a release in that stage, or whether the release should start automatically.

What matters most when you plan a release is not how to automate the deployment, but how to structure the release flow: stages, people, approvals, etc.
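
For comparison, the same two-stage structure can be sketched as a YAML multi-stage pipeline; everything below (stage, environment and job names) is purely illustrative, while in classic Release Management the same flow is built in the visual designer:

    # Illustrative sketch of a two-stage flow; approvers are configured on the environments.
    stages:
    - stage: TestMachine
      jobs:
      - deployment: DeployToTest
        environment: test            # approvals/checks can be attached to this environment
        strategy:
          runOnce:
            deploy:
              steps:
              - script: echo "deploy to the test machine"
    - stage: Production
      dependsOn: TestMachine
      jobs:
      - deployment: DeployToProduction
        environment: production      # a manual approval check gates this stage
        strategy:
          runOnce:
            deploy:
              steps:
              - script: echo "deploy to production"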

I strongly suggest that, even if you have no idea how to automate the release, you MUST at least have a release document that contains detailed instructions on how to install the software. Once you have a release document, you can simply add it to source control and create a completely manual release definition.

If the release document is included in source control, you can simply publish it as a build artifact; it will then be automatically downloaded by the release, and you can create a release like the following. In Figure 2 you can see a typical two-phase release for the stages defined in Figure 1.
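
If the build were defined in YAML, publishing the document as a build artifact could look roughly like this (docs/release-document.md is a hypothetical path, use wherever the document actually lives in your repository):

    # Publish the manual release document so every release can download it as an artifact.
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: docs/release-document.md    # hypothetical location in the repository
        ArtifactName: ReleaseDocument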

Figure 2: A manual release: a copy-files step copies artifacts to a well-known location, followed by a manual step.

I usually have an agent-based phase because I want to copy the artifact data from the agent directory to a well-known directory. The agent directory is clumsy and can be deleted by cleanup, so I want my release files copied to a folder like c:\release (Figure 3).

Figure 3: The only automated task copies all the artifacts to the c:\release folder.
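
The copy step of Figure 3 is essentially a Copy Files task; a rough YAML equivalent, with the target folder layout being an assumption of mine, would be:

    # Copy everything downloaded by the release to a stable, well-known local folder.
    - task: CopyFiles@2
      inputs:
        SourceFolder: $(System.DefaultWorkingDirectory)     # where release artifacts are downloaded
        Contents: '**'
        TargetFolder: 'c:\release\$(Release.ReleaseName)'   # assumed layout under c:\release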

After this copy I have another phase, this time an agentless one, because it contains only a Manual deploy action that simply tells the user where to find the instructions for the manual deploy.

Figure 4: Manual step in the release process

This is important because, as you can see in Figure 4, I can instruct the user on where to find the release document (it is in source control, the build adds it to the artifacts, and finally it is copied to the release folder). Another important aspect is the ability to notify specific users to perform the manual task.
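
Classic releases implement this with an agentless phase containing a manual step; in YAML pipelines a roughly equivalent approach (not the one shown in Figure 4, which uses the classic editor) is the ManualValidation task on a server job, sketched here with hypothetical users and instructions:

    # Agentless (server) job: no build agent is consumed while waiting for the manual deploy.
    - job: WaitForManualDeploy
      pool: server
      steps:
      - task: ManualValidation@0
        inputs:
          notifyUsers: 'ops-team@example.com'    # hypothetical list of people performing the deploy
          instructions: 'Follow the release document copied to c:\release, then resume.'
          onTimeout: reject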

Having the manual release document stored in source control allows you to evolve the file along with the code: each code version has the correct release document.

Since I use GitVersion to name the build based on GitVersion tags, I prefer to go to the Options tab of the release definition and change the release name format.

Figure 5: Configure release name format to include BuildNumber

A release usually has a simple increasing number, $(Rev:rr), but having something like Release-34 as the release name does not tell me anything useful. If you are curious about what you can use in the Release Name Format field, you can simply check the official documentation. There you read that you can use BuildNumber, which will contain the build number of the first artifact of the release. In my opinion this is really useful information: if the build name contains GitVersion tags, it allows you to have a meaningful release name.
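
Based on the names visible in Figure 6, the format is presumably something along these lines (the exact separators are my assumption):

    Release - $(Rev:rr) - $(Build.BuildNumber)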

Figure 6: New release naming in action.

If you look at Figure 6 you could argue that the build name is already visible below the release number, so the new naming method (1) does not add any information compared to the standard naming with only an increasing number (2).

This is true until you move to the Deployment Groups page or to other parts of the Release Management UI, because there are many places where only the release name is shown. If you look at Figure 7 you can verify that with the old naming scheme I only see a release number for each machine: for machine (1) I just know that the latest release is Release 16, while for a machine deployed after the naming change I get Release – 34 – JarvisPackage 4.15.11.

Figure 7: Deployment groups with release names

Thanks to the release document and Azure DevOps Release Management, I have a controlled environment where I know who started a release, which artifacts were deployed to each environment, and who performed the manual deploy.

Having a completely manual release definition is a great starting point, because it states what must be deployed, by whom and how, and it logs every release: who started it, who performed the manual steps, who approved the release for each stage, and so on.

Once everything works, I usually start writing automation scripts to automate the steps of the release document. Each time a step is automated, I remove it from the deploy document or mark it explicitly as "not to be done manually because it is automated".

Happy DevOps.

Run code coverage for a Python project with Azure DevOps

Creating a simple build that runs Python tests written with the PyTest framework is straightforward; the next step is getting code coverage. Even though I'm pretty new to Python, adding code coverage to a build is really simple, thanks to a specific task that comes out of the box with Azure DevOps: Publish Code Coverage.

In Azure DevOps you can create a build with the web editor or with a simple YAML file. I prefer YAML, but since I already demonstrated a YAML build for Python in an older post, this time I'm going to use the standard web editor to create a classic build. Here is the core part of the build.

Figure 1: Core build to run tests and have code coverage uploaded to Azure DevOps

As you can see, I decided to run the tests with a Bash script on a Linux agent; here is the task configuration, where I added the Pytest options needed to collect code coverage during the test run.

Figure 2: Configuration of the Bash script that runs the Pytest tests

The task is configured to run an inline script (1); the command line (2) contains the --cov option to specify the Python modules I want to monitor for code coverage, plus a couple of --cov-report options to produce output in XML and HTML format. I've also specified the subfolder that contains the module I want to test (3) and, finally, I've configured the task with Continue on Error (4), so if some of the tests fail the build is marked as partially succeeded instead of failed.

Thanks to Pytest, running code coverage is just a matter of adding some options to the command line.
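
For reference, the Bash task of Figure 2 boils down to something like the following YAML; module and folder names are placeholders, not the real ones from my build, and pytest plus pytest-cov are assumed to be installed in an earlier step:

    # Run the test suite and collect coverage; failing tests only mark the build partially succeeded.
    - task: Bash@3
      continueOnError: true
      inputs:
        targetType: inline
        workingDirectory: src          # hypothetical subfolder containing the module under test
        script: |
          python -m pytest --cov=mymodule --cov-report=xml --cov-report=html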

After a build finishes, you can see in the output how Pytest generates the code coverage report: it creates a file called coverage.xml and an entire directory called htmlcov that contains the HTML coverage report.

Figure 3: Result of running tests with code coverage.

If you look at Figure 1 you can see that the final task of the build is a Publish Code Coverage task, whose duty is to grab the output of the Pytest run and upload it to the server. Configuration is really simple: you need to choose Cobertura as the code coverage tool (the format used by Pytest) and point the task to the output of the test run. Looking at the output in Figure 3 you can double-check that the summary file is called coverage.xml and that the whole report directory is the htmlcov subdirectory.

Figure 4: Configuration for Publish Code Coverage task.

Once you run the build, you can find the code coverage result on the summary page, as well as the code coverage report published as a build artifact; the whole configuration will take you no more than 10 minutes.

Figure 5: Artifacts containing code coverage reports, as well as the code coverage percentage, are accessible from the Build Summary page.

Finally, you also have a dedicated Code Coverage tab showing the HTML summary of the report.

Figure 6: Code coverage HTML report uploaded to a dedicated Code Coverage tab in the build result

Even if the code coverage output is not perfectly formatted, you can immediately verify the code coverage percentage of your tests.

Gian Maria.

Set the new Azure DevOps url for your account

With the switch from the old url format organization.visualstudio.com to dev.azure.com/organization, it is good practice to start the transition to the new url as soon as possible. The old url will continue to work for a long time, but the new official domain is going to become the default.

Every user can still use either the new or the old domain name, but there is a setting in the general settings page of the organization that globally enables the new url.

Figure 1: New url format enabled in global settings.

Actually, there are small parts of the site that do not work perfectly if you browse Azure DevOps with a url different from the one configured in that setting. This is often due to security restrictions, especially cross-site origin rules. As an example, I had a problem with the new url in the release page, because some of the scripts were still served from organization.visualstudio.com and got blocked due to CORS.

Figure 2: Cross origin request blocked due to security plugin.

This prevented me from correctly using the new address. After switching to the new url with the setting shown in Figure 1, the problem is gone.

This setting also makes the old url redirect to the new one: whenever someone types organization.visualstudio.com they are redirected to the new url, thus enforcing its usage automatically.

Gian Maria.

VSTS name change to Azure DevOps: effects on Git repositories

As I blogged in the past, it is super easy to create a VSTS build (now Azure DevOps Pipeline) that keeps two repositories in sync. In that article, one of the steps pushes the new code to the destination repository with a url like https://$(token)@myaddress.visualstudio.com/DefaultCollection, to automatically include a token to authenticate against the destination repository.

Now some of my builds started to fail due to timeouts, and I immediately suspected the reason: the name change from VSTS to Azure DevOps changed the base url from accountname.visualstudio.com to dev.azure.com/accountname, and this somehow broke the build.

Due to the rebranding of VSTS into Azure DevOps and the change of url, you need to check whether some extension or build broke because it still uses the old url.

The solution is super simple: go to the repository page of your organization using the new dev.azure.com address and find the new address of the repository, something like https://accountname@dev.azure.com/accountname/teamproject/_git/repositoryName. You just need to add a colon and a valid auth token after the account name, so the url now looks like https://organization:$(Token)@dev.azure…..

Figure 1: New url format to force push to a destination repository

Since I have many mirror builds, I want a centralized way to securely store the token value so that all the builds take it from a single location; the obvious solution is a variable group linked to these builds.

Figure 2: Variable group linked to this build.

In the variable group I have a single variable called Token, which is secret and is used by many builds; each time the token expires, I can change it in one place and all the builds will use the new value.

Figure 3: A simple variable group that contains a single secret variable.
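
If the mirror builds were YAML pipelines, the same centralization could be obtained by linking the group directly; the group name here is hypothetical:

    # Link the variable group so $(Token) is available to the pipeline.
    variables:
    - group: MirrorTokens    # hypothetical group containing the secret Token variable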

That’s all.

Gian Maria.