Is Manual Release in Azure DevOps useful?

When people create a release in Azure DevOps, they primarily focus on how to make the release automatic, but to be honest, automation is only one side of the release, and probably not the most useful one.

First of all, a release is about auditing and understanding which version of the software is released where and by whom. In this scenario what matters most is "how can I deploy my software to production".

Figure 1: Simple release composed of two stages.

In Figure 1 I created a simple release with two stages; this clearly states that to go to production I need to deploy to a Test Machine stage, then deploy to production. I do not want to give you a full tutorial, MSDN is full of nice examples, but if you look at the picture you can notice the small user icons before each stage, which allow you to specify who can approve a release in that stage, or whether the release should start automatically.

What is most important when you plan a release is not how to automate the deployment, but how to structure the release flow: stages, people, and so on.

As I strongly suggest, even if you have no idea how to automate the release, you MUST have at least a release document that contains detailed instructions on how to install the software. When you have a release document, you can simply add that document to source control and create a completely manual release definition.

If the release document is included in source control, you can publish it as a build artifact; it will then be automatically downloaded by the release, and you can create a release like the one below (a sketch of such a publish step follows Figure 2). In Figure 2 you can see a typical two-phase release for the stages defined in Figure 1.

Figure 2: A manual release uses a Copy Files task to copy artifacts to a well-known location, followed by a manual step.
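If the build that produces your artifacts is a YAML build, publishing the release document next to the binaries could look like the following sketch; the path and the artifact name are hypothetical and should be adapted to where the document lives in your repository.

- task: PublishBuildArtifacts@1
  displayName: 'Publish release document'
  inputs:
    PathtoPublish: 'docs/ReleaseDocument.md'
    ArtifactName: 'ReleaseDocument'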

I usually have an agent-based phase because I want to copy artifact data from the agent directory to a well-known directory. The agent directory is clumsy and can be deleted by cleanup jobs, so I want my release files to be copied to a folder like c:\release (Figure 3; a sketch of an equivalent task configuration follows the figure).

Figure 3: The only automatic task copies all the artifacts to the c:\release folder.
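The task in Figure 3 is the standard Copy Files task configured in the classic release editor; just for reference, the equivalent YAML configuration would be roughly the following, assuming the artifacts are downloaded to the default $(System.DefaultWorkingDirectory) folder.

- task: CopyFiles@2
  displayName: 'Copy artifacts to c:\release'
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)'
    Contents: '**'
    TargetFolder: 'c:\release'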

After this copy I have another phase, this time agentless, because it contains only a manual step that simply tells the user where to find the instructions for the manual deploy.

Figure 4: Manual step in the release process

This is important because, as you can see in Figure 4, I can instruct the user on where to find the release document (it is in source control, the build adds it to the artifacts, and finally it is copied into the release folder). Another important aspect is the ability to notify specific users to perform the manual task.

Having the manual release document stored in source control allows you to evolve the file along with the code: each code version has its correct release document.

Since I use GitVersion to change the build name based on GitVersion tags, I prefer to go to the Options tab of the release and change the release name format.

Figure 5: Configure release name format to include BuildNumber

Releases usually have a simple increasing number, $(Rev:rr), but having something like Release-34 as a release name does not tell me anything useful. If you are curious about what you can use in the Release Name Format field, you can simply check the official documentation. From that document you learn that you can use BuildNumber, which will contain the build number of the first artifact of the release. In my opinion this is really useful information: if the build name contains GitVersion tags, it allows you to have a meaningful release name.
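For example, a Release Name Format similar to the following (my guess, based on the name visible in Figure 7) combines the incremental revision with the build number of the primary artifact:

Release - $(Rev:rr) - $(Build.BuildNumber)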

Figure 6: New release naming in action.

If you look at Figure 6 you could argue that the build name is visible below the release number, so the new naming method (1) does not add information compared to the standard naming with only an increasing number (2).

This is true until you move to the Deployment Groups view or other parts of the Release Management UI, because there are many places where you can see only the release name. If you look at Figure 7 you can verify that with the old naming scheme I see only the release number for each machine: for machine (1) I know that the latest release is Release 16, while for a machine where I released after the naming change I get Release – 34 – JarvisPackage 4.15.11.

Figure 7: Deployment groups with release names

Thanks to the release document and Azure DevOps Release Management, I have a controlled environment where I can know who started a release, which artifacts were deployed to each environment, and who performed the manual deploy.

Having a completely manual release definition is a great starting point, because it states what must be deployed, by whom and how, and it logs every release: who started it, who performed the manual steps, who approved the release for each stage, and so on.

Once everything works, I usually start writing automation scripts to automate the steps of the release document. Each time a step is automated, I remove it from the deploy document or explicitly mark it as “not to be done manually because it is automated”.

Happy DevOps.

WIQL editor extension for Azure DevOps

One of the nice features of Azure DevOps is extensibility: thanks to the REST API you can write addins or standalone programs that interact with the services. One of the addins I like the most is the Work Item Query Language Editor, a nice addin that allows you to interact directly with the underlying syntax of Work Item queries.

Once installed, whenever you are in the query editor you have the ability to directly edit the query with WIQL syntax, thanks to the “Edit Query wiql” menu entry.

Figure 1: Wiql query editor new menu entry in action

As you can see in Figure 2, there are lots of nice features in this addin, not only the ability to edit a query directly in WIQL syntax.

Figure 2: WIQL editor in action

You can clearly edit and save the query (3), but you can also export the query to a file that will be downloaded to your PC and then re-import it in a different Team Project. This is a nice function if you want to store some typical queries somewhere (such as source control) and then re-import them into a different Team Project, or even a different organization.

If you start editing the query, you will be amazed by the IntelliSense support (Figure 3), which guides you in writing a correct query and is really useful because it offers a nice list of all available fields.

Figure 3: IntelliSense in action in the query editor.

The IntelliSense seems to actually use the API to grab the list of all valid fields, because it even suggests custom fields that you used in your custom process. The only drawback is that it lists all the available fields, not only those available in the current Team Project, but this is a really minor issue.
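If you are curious where such a list can come from, the Work Item Tracking REST API exposes a fields endpoint that returns every field defined in the organization; I am only guessing that this is what the addin queries, but the endpoint itself is documented:

GET https://dev.azure.com/{organization}/_apis/wit/fields?api-version=5.0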

With IntelliSense, syntax checking and field suggestions, this addin is really a must-install for your Azure DevOps instance.

Figure 4: IntelliSense is available not only on default fields, but also on custom fields used in a custom process.

If you are interested in the editor used, you will find that this addin uses the Monaco editor, another nice piece of open source software by Microsoft.

Another super cool feature of this extension is the Query Playground, where you can simply type your query, execute it and visualize the result directly in the browser.

Figure 5: WIQL playground in action; note the ASOF operator used to issue a query in the past.

As you can see from Figure 5, you can easily test your query, but most importantly the ASOF operator is fully supported, and this gives you the ability to run historical queries directly from the web interface instead of resorting to the API. If you need to experiment with WIQL and quickly create and test a query, this is the tool to go for.

I think this addin is really useful, not only if you are interacting with the service through the REST API and raw WIQL, but also because it allows you to export/import queries between projects and organizations and to easily execute historical queries directly from the UI.

Having full support for WIQL allows you to use features that are not usually available through the UI, like the ASOF operator.
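Just to give an idea, a historical query with ASOF could look like the following sketch (the field list and the date are arbitrary examples):

SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
AND [System.WorkItemType] = 'Product Backlog Item'
ASOF '2019-01-01'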

As a last trick: if you create a query in the web UI, then edit it with this addin, add the ASOF operator and save, the ASOF will be saved in the query, so you have a historical query executable from the UI. The only drawback is that, if you later modify the query with the web editor and save, the ASOF operator will be removed.

Gian Maria.

Change Work Item Type in a fresh installation of Azure DevOps Server

If you want to use Azure DevOps I strongly suggest using the cloud version at https://dev.azure.com, but if you really need to have it on premises you can install Team Foundation Server, now renamed Azure DevOps Server.

One of the most awaited features of the on-premises version is the ability to change work item type and to move work items between projects, a feature present in Azure DevOps Server but one that requires completely disabling Reporting Services to work, as I discussed in an old post.

On that very post I got a comment telling me that, after a fresh installation of Azure DevOps Server, even without configuring Reporting Services, the option to move a Work Item between Team Projects was missing, as well as the option to change Work Item type. The problem is that, until you explicitly disable reporting on the TFS instance, those two options are not available. This is probably to avoid the scenario where you use these features, then subsequently enable reporting and end up with incorrect data in the warehouse.

First of all we need to clarify some radical changes in Azure DevOps Server 2019 compared to the former version, TFS 2018.

Azure DevOps Server has two different types of Project Collection: the classic one, based on the XML process model, and the new one, based on process inheritance.

Figure 1: Different types of Project Collection in Azure DevOps.

If you check Figure 1, you can verify that an inheritance-based project collection does not use SQL Server Analysis Services and reporting; thus you can always change Team Project or type, because reporting is not used in this type of collection. As you can see in Figure 2, if I have a project collection based on the inheritance model, I can change work item type even if reporting is configured.

Figure 2: Project collections based on the inheritance model are not affected by the Reporting Services configuration.

If you instead create a new collection using the old XML process model, even if you have not configured Reporting Services, the ability to change type or to move between Team Projects is not present. This happens because, even if you never configured reporting, you must explicitly disable that feature to prevent it from being reactivated in the future and producing erratic reports.

Figure 3: Even if you did not configure reporting for Azure DevOps Server, the options to change Team Project and to change type are not available.

To enable Move between Team Projects and Change Work Item Type you really need to explicitly disable reporting, as shown in Figure 3 and Figure 4.

If you disable reporting, the system warns you that the reporting options cannot be enabled again.

Figure 4: A confirmation dialog warns that disabling reporting is an operation that cannot be undone.

As soon as reporting is disabled, you can change type and move work items to other Team Projects.

Figure 5: When reporting is explicitly disabled, you immediately have the two options enabled.

Happy Azure DevOps.

Gian Maria.

Import Work Item from external system to Azure DevOps

In a previous post I dealt with exporting Work Item information into a Word file with the Azure DevOps API; now I want to deal with the inverse operation, importing data from an external service into Azure DevOps.

If the source service is a Team Foundation Server, you can use the really good tool by Naked Agility Ltd that you can find in the marketplace; you can also have a shot at the official migration tool if you need to migrate an entire collection. But if you have some data to import from an external system, using the API can be a viable solution.

I created a simple project on GitHub to demonstrate basic usage of the API to import data into Azure DevOps (both the server and the online version), where I deal only with the Azure DevOps part, leaving to the user the burden of implementing the code that extracts data from the source system.

If you need to import data into a system, it is better not to assume where the data is coming from. By dealing only with the import part, you leave to others the work of getting data out of the source server.

The code uses a MigrationItem class to store all the information we support when migrating data to Azure DevOps; this class contains a string field to identify the unique id in the source system, as well as the work item type to create. It then contains a list of MigrationItemVersion that represents the content of the data in the original system over time. In this proof of concept I support only Title, Description and the date of the modification. This structure is needed because I want to migrate the full history from the original system, not only a snapshot of the latest version, so I need to know how the original data looked over time.

public class MigrationItem
{
    public MigrationItem()
    {
        _versions = new List<MigrationItemVersion>();
    }

    /// <summary>
    /// This is the original Id of original system to keep track of what was already
    /// imported.
    /// </summary>
    public String OriginalId { get; set; }

    /// <summary>
    /// You need to specify type of workitem to be used during import.
    /// </summary>
    public String WorkItemDestinationType { get; set; }

    private readonly List<MigrationItemVersion> _versions;

    // The members below are not shown in the original snippet; they are
    // reconstructed from how the class is used later in the post
    // (Versions, AddVersion and GetVersionAt).
    public IEnumerable<MigrationItemVersion> Versions => _versions;

    public void AddVersion(MigrationItemVersion version)
    {
        _versions.Add(version);
    }

    public MigrationItemVersion GetVersionAt(Int32 index)
    {
        return _versions[index];
    }
}

To make this code run you need to do a couple of things: first of all you need to create a customized process from one of the three base processes and add at least one text field to store the original id.

Figure 1: Adding a custom field to store the original id of the imported work item.

This field is really important because it helps the user correlate imported Work Items with data in the original system, thus allowing the import tool to identify already-created Work Items in order to fix or re-import some of them.

Whenever you import or move data between systems, you need at least a unique identifier of the data in the original system to be stored in the imported Work Item.

Figure 2: Custom original id on an imported Work Item.

As you can see in Figure 2 this information is read-only, because it is used only by the importer and should never be changed by a human; you can obtain this with a simple Work Item rule.

Figure 3: Make the OriginalId field read-only.

This allows me to prevent users from messing with and changing this number. You can appreciate how easy it is to modify the template and create new fields and rules in Azure DevOps.

Thanks to process inheritance, it is really simple to modify the process in Azure DevOps, adding information specific to your own process, like the id of the original item in case of an import.

Now that everything is in place, you need to add the user that performs the import to the Project Collection Service Accounts group; this specific group allows the user to perform actions on behalf of others. This is a very special permission, but you should use it if you want to perform a migration with high fidelity.

Once the migration is finished, you can remove the user from Project Collection Service Accounts and restore its standard permissions. Remember: since the sample project uses an access token to authenticate, be sure that the user that generated the token is a member of the Project Collection Service Accounts group before running the test migration tool.

Being part of Project Collection Service Accounts allows a user to impersonate others as well as bypass some of the validation rules for the Work Item. This is needed to write a Work Item change with a date in the past and on behalf of another user.

Now comes the code part which, thanks to the Azure DevOps API, is really simple. I will not go into the details of connection and interaction, because I already discussed them in another post; this time I am interested in how I can create a new Work Item and save multiple versions of it to recreate the history.
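Just to give some context (the real connection code lives in the linked post and in the GitHub project), a minimal connection sketch with the classic client object model could look like the code below; the organization URL, the token and the project name are placeholders, and the connection object used in the following snippets is a small wrapper of the sample project that exposes this WorkItemStore.

// Minimal connection sketch, assuming the classic client object model
// (Microsoft.TeamFoundation.WorkItemTracking.Client); this is not the actual
// helper class used in the sample project.
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using Microsoft.VisualStudio.Services.Common;
using System;

var personalAccessToken = "<your PAT>"; // e.g. the content of the file passed with --tokenfile
var credentials = new VssCredentials(new VssBasicCredential(String.Empty, personalAccessToken));
var collection = new TfsTeamProjectCollection(new Uri("https://dev.azure.com/yourorganization"), credentials);
collection.EnsureAuthenticated();

// BypassRules, together with membership in the Project Collection Service Accounts
// group, is what allows writing ChangedBy/ChangedDate values in the past.
var workItemStore = new WorkItemStore(collection, WorkItemStoreFlags.BypassRules);
var teamProject = workItemStore.Projects["TestMigration"];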

public async Task<Boolean> ImportWorkItemAsync(MigrationItem itemToMigrate)
{
    var existingWorkItem = GetWorkItem(itemToMigrate.OriginalId);
    if (existingWorkItem != null)
    {
        Log.Information("A workitem with originalId {originalId} already exists, it will be deleted", itemToMigrate.OriginalId);
        connection.WorkItemStore.DestroyWorkItems(new[] { existingWorkItem.Id });
    }

The core method is ImportWorkItemAsync, which takes a MigrationItem and creates the new work item. In the very first lines I simply look for a work item already bound to that external item; if it is present, I simply destroy it. This approach is radical, but it allows me to issue multiple test import runs without the hassle of deleting everything before each import. More importantly, if some Work Items were imported incorrectly, I can simply re-import them and the corresponding Work Items will be recreated correctly.

private WorkItem GetWorkItem(String originalId)
{
    var existingWorkItems = connection
        .WorkItemStore
        .Query($@"select * from  workitems where {fieldWithOriginalId} = '" + originalId + "'");
    return existingWorkItems.OfType<WorkItem>().FirstOrDefault();
}

The nice thing about customization is that I can query the Work Item Store using a condition on my newly defined field. To keep everything flexible, I can specify the name of the field created in Figure 1 on the command line. The whole command line for the example looks like this:

--address https://gianmariaricci.visualstudio.com 
--tokenfile C:\develop\Crypted\patOri.txt 
--teamproject TestMigration 
--originalIdField custom_originalId

Once we are sure that the system does not contain a Work Item related to that external id, the code creates a new Work Item in memory:

private WorkItem CreateWorkItem(MigrationItem migrationItem)
{
    WorkItemType type = null;
    try
    {
        type = teamProject.WorkItemTypes[migrationItem.WorkItemDestinationType];
    }
    catch (WorkItemTypeDeniedOrNotExistException) { } // ignore: a missing type is logged below

    if (type == null)
    {
        Log.Error("Unable to find work item type {WorkItemDestinationType}", migrationItem.WorkItemDestinationType);
        return null;
    }

    WorkItem workItem = new WorkItem(type);
    Log.Information("Created Work Item for type {workItemType} related to original id {originalId}", workItem.Type.Name, migrationItem.OriginalId);

    // now set the basic values that we need, like the original id
    workItem[fieldWithOriginalId] = migrationItem.OriginalId;
    return workItem;
}

The type of the Work Item to create is part of the MigrationItem information, and the code simply verifies that such a Work Item type really exists in the current team project. If everything is ok, the code creates a new WorkItem in memory using that type, then populates the original id field. This structure allows me to query the original system and then, for each MigrationItem, decide the destination type in Azure DevOps.

The last step is iterating through all the versions of the original item and saving every change to the Work Item Store to recreate the history of the original MigrationItem.

//now that we have the work item, we need to start creating all the versions
for (int i = 0; i < itemToMigrate.Versions.Count(); i++)
{
    var version = itemToMigrate.GetVersionAt(i);
    workItem.Fields["System.ChangedDate"].Value = version.VersionTimestamp;
    workItem.Fields["System.ChangedBy"].Value = version.AuthorEmail;
    if (i == 0)
    {
        workItem.Fields["System.CreatedBy"].Value = version.AuthorEmail;
        workItem.Fields["System.CreatedDate"].Value = version.VersionTimestamp;
    }
    }
    workItem.Title = version.Title;
    workItem.Description = version.Description;
    var validation = workItem.Validate();
    if (validation.Count > 0)
    {
        Log.Error("N°{errCount} validation errors for work Item {workItemId} originalId {originalId}", validation.Count, workItem.Id, itemToMigrate.OriginalId);
        foreach (Field error in validation)
        {
            Log.Error("Version {version}: We have validation error for work Item {workItemId} originalId {originalId} - Field: {name} ErrorStatus {errorStatus} Value {value}", i, workItem.Id, itemToMigrate.OriginalId, error.Name, error.Status, error.Value);
        }
        return false;
    }
    workItem.Save();
    if (i == 0)
    {
        Log.Information("Saved for the first time Work Item for type {workItemType} with id {workItemId} related to original id {originalId}", workItem.Type.Name, workItem.Id, itemToMigrate.OriginalId);
    }
    else
    {
        Log.Debug("Saved iteration {i} for original id {originalId}", i, itemToMigrate.OriginalId);
    }
}

return true;

The above code migrates only Title and Description, but it nevertheless allows me to verify that I am able to import not just a snapshot of a Work Item, but its full history. As you can see this is really simple: for each version I only need to populate System.ChangedDate and System.ChangedBy, and for the first version I can also set System.CreatedBy and System.CreatedDate.

One rule is in place: you cannot save with a date that is not greater than the date used for the last save; this forces you to import all the versions in the correct order. Apart from this, you can simply save with a date in the past and as a different user.

Before saving the Work Item I simply call the Validate() method, which detects any validation error before the save; in case of errors, I log them and return false to inform the caller that the Work Item was not fully imported.

Here is a simple test that imports some bogus data.

MigrationItem mi = new MigrationItem();
mi.OriginalId = "AA123";
mi.WorkItemDestinationType = "Product Backlog Item";
mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title test",
    VersionTimestamp = new DateTime(2010, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified Again",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

var importResult = importer.ImportWorkItemAsync(mi).Result;

Launching the test program I see this output:

Figure 4: Output of the test import run.

This is not the first run, so the log informs me that an item bound to that original id already existed and was deleted, then that a new Work Item was created and subsequently updated two times.

Figure 5: The Work Item was correctly created in the Azure DevOps collection.

The nice thing is that the work item history reports all the modifications with the correct date, and this can be done only because the user that generated the token is a member of the Project Collection Service Accounts group. (Do not forget to remove the user from the group after finishing the import, or use a specific user for the import.)

Thanks to the great flexibility of the Azure DevOps API, you can easily import data with full history and obtain a migration with higher fidelity than importing only the last snapshot of the data.

The only indication that the action was performed by an import is that the history shows “(Via Gian Maria Ricci – aka Alkampfer)”, which indicates that the action was not really performed by that user but was instead done with impersonation. Nevertheless, the full history of the data is preserved.

You can find the full example on GitHub.

Happy Azure DevOps.

Gian Maria.

Sonar Analysis of Python with Azure DevOps pipeline

Once you have tests and code coverage in your build of Python code, the last step for a good build is adding support for code analysis with Sonar/SonarCloud. SonarCloud is the best option if your code is open source, because it is free and you do not need to install anything except the free addin from the Azure DevOps Marketplace.

Starting from the original build you only need to add two steps: Prepare analysis on SonarCloud and Run SonarCloud analysis, in the same way you do analysis for a .NET project.

Figure 1: Python build in Azure DevOps

You do not need to configure anything for a standard analysis with default options; just follow the configuration in Figure 2:

Figure 2: Configuration of Sonar Cloud analysis

The only trick I had to use is deleting the /htmlcov folder created by pytest for the code coverage results. Once the coverage result is uploaded to the Azure DevOps server I do not need it anymore, and I want to exclude it from the Sonar analysis. Remember that if you do not configure anything special, SonarCloud will analyze everything in the code folder, so you will end up with errors like these:

Figure 3: Failed SonarCloud analysis caused by the output of code coverage.

You could clearly do a better job by configuring the SonarCloud analysis to skip those folders, but in this situation a simple Delete Files task does the job.

To avoid cluttering the SonarCloud analysis with unneeded files, you need to delete any files that were generated in the working directory and that you do not want to analyze, like code coverage reports.

Another important setting is in the Advanced section, because you should specify the file containing the code coverage result as an extra Sonar property.

Figure 4: Extra property to specify location of coverage file in the build.

Now you can run the build and verify that the analysis was indeed sent to SonarCloud.

Figure 5: After the build I can analyze code smells directly in SonarCloud.

If, like me, you prefer YAML builds, here is the complete YAML build definition that you can adapt to your repository.

queue:
  name: Hosted Ubuntu 1604

trigger:
- master
- develop
- features/*
- hotfix/*
- release/*

steps:

- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'

- bash: |
   pip install pytest 
   pip install pytest-cov 
   pip install pytest-xdist 
   pip install pytest-bdd 
  displayName: 'Install a bunch of pip packages.'

- task: SonarSource.sonarcloud.14d9cde6-c1da-4d55-aa01-2965cd301255.SonarCloudPrepare@1
  displayName: 'Prepare analysis on SonarCloud'
  inputs:
    SonarCloud: SonarCloud
    organization: 'alkampfergit-github'
    scannerMode: CLI
    configMode: manual
    cliProjectKey: Pytest
    cliProjectName: Pytest
    extraProperties: |
     # Additional properties that will be passed to the scanner, 
     # Put one key=value per line, example:
     # sonar.exclusions=**/*.bin
     sonar.python.coverage.reportPath=$(System.DefaultWorkingDirectory)/coverage.xml

- bash: 'pytest --junitxml=$(Build.StagingDirectory)/test.xml --cov --cov-report=xml --cov-report=html' 
  workingDirectory: '.'
  displayName: 'Run tests with code coverage'
  continueOnError: true

- task: PublishTestResults@2
  displayName: 'Publish test result /test.xml'
  inputs:
    testResultsFiles: '$(Build.StagingDirectory)/test.xml'
    testRunTitle: 010

- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/htmlcov'
    additionalCodeCoverageFiles: '$(System.DefaultWorkingDirectory)/**'

- task: DeleteFiles@1
  displayName: 'Delete files from $(System.DefaultWorkingDirectory)/htmlcov'
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)/htmlcov'
    Contents: '**'

- task: SonarSource.sonarcloud.ce096e50-6155-4de8-8800-4221aaeed4a1.SonarCloudAnalyze@1
  displayName: 'Run Sonarcloud Analysis'

The only setting you need to adapt is the name of the SonarCloud connection (in this example it is called SonarCloud), which you can add or change in Project Settings > Service Connections.

Figure 6: Service connection settings where you can add or change the connection to the SonarCloud servers.

A possible final step is adding the Build Breaker extension to your account, which makes your build fail whenever the SonarCloud Quality Gate fails.

Thanks to the Azure DevOps build system, creating a build that runs tests and analyzes your Python code is extremely simple.

Happy Azure DevOps.

Gian Maria