Import Work Items from an external system to Azure DevOps

In a previous post I dealt with exporting Work Item information to a Word file with the Azure DevOps API; now I want to deal with the inverse operation: importing data from an external service into Azure DevOps.

If the source service is a Team Foundation Server, you can use the really good tool by Naked Agility Ltd that you can find in the marketplace, or you can have a shot at the official migration tool if you need to migrate an entire collection. But if you have data to import from an external system, using the API can be a viable solution.

I’ve created a simple project on GitHub to demonstrate basic usage of the API to import data into Azure DevOps (both the server and the online version). The project deals only with the Azure DevOps part, leaving to the user the burden of implementing the code that extracts data from the source system.

If you need to import data into a system, it is better not to assume where the data is coming from. By dealing only with the import part, you leave others free to do the work of getting data out of the source server.

The code uses a MigrationItem class to store all the information we support when migrating data to Azure DevOps. This class contains a string field that identifies the unique id of the item in the source system, as well as the Work Item type to create. It also contains a list of MigrationItemVersion objects that represent the content of the data in the original system over time. In this proof of concept I support only Title, Description and the date of the modification. This structure is needed because I want to migrate the full history from the original system, not only a snapshot of the latest version, so I need to know how the original data changed over time.

public class MigrationItem
{
    public MigrationItem()
    {
        _versions = new List<MigrationItemVersion>();
    }

    /// <summary>
    /// This is the id of the item in the original system, used to keep track
    /// of what was already imported.
    /// </summary>
    public String OriginalId { get; set; }

    /// <summary>
    /// You need to specify the type of Work Item to be used during import.
    /// </summary>
    public String WorkItemDestinationType { get; set; }

    private readonly List<MigrationItemVersion> _versions;
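
The companion MigrationItemVersion class is not shown in the excerpt above; based on how it is used later in the post, it is essentially a plain data container, something like the following sketch (the property names come from the sample code, the implementation itself is an assumption of mine).

public class MigrationItemVersion
{
    /// <summary>Title of the item in the original system at this point in time.</summary>
    public String Title { get; set; }

    /// <summary>Description of the item at this point in time.</summary>
    public String Description { get; set; }

    /// <summary>Email of the user that made the change in the original system.</summary>
    public String AuthorEmail { get; set; }

    /// <summary>Date and time of the change in the original system.</summary>
    public DateTime VersionTimestamp { get; set; }
}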

To make this code run you need to do a couple of things. First of all, you need to create a customized process from one of the three base processes and add at least one text field to store the original id.

Figure 1: Adding a custom field to store the original id of the imported Work Item

This field is really important because it helps the user correlate the imported Work Item with the data in the original system, and it allows the import tool to identify already created Work Items in order to fix or re-import some of them.

Whenever you import or move data between systems, you need at the very least a unique identifier of the data in the original system, stored in the imported Work Item.

Figure 2: Custom original id field on an imported Work Item.

As you can see in Figure 2, this information is read-only, because it is used only by the importer and should never be changed by a human; you can obtain this with a simple Work Item rule.

Figure 3: Make OriginalId field Readonly

This prevents users from messing with and changing this value. You can appreciate how easy it is to modify the template and create new fields and rules in Azure DevOps.

Thanks to process inheritance, it is really simple to modify a process in Azure DevOps, adding information specific to your own process, like the id of the original item in case of an import.

Now that everything is in place, you need to add the user that performs the import to the Project Collection Service Accounts group; this specific group allows the user to perform actions on behalf of others. This is a very special permission, but you should use it if you want to perform a migration with high fidelity.

Once the migration is finished, you can remove the user from the Project Collection Service Accounts group and restore its standard permissions. Remember, since the sample project uses a Personal Access Token to authenticate, be sure that the user that generated the token is a member of the Project Collection Service Accounts group before running the test migration tool.

Being part of Project Collection Service Accounts allows a user to impersonate others as well as bypass some of the validation rules for the Work Item. This is needed to save a change to a Work Item with a date in the past and on behalf of another user.
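
To give an idea of what this means in code, here is a minimal sketch of a connection opened with a Personal Access Token and with rule bypassing enabled; this is my own sketch (the sample wraps the connection in its own class), but the types come straight from the classic client object model.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using Microsoft.VisualStudio.Services.Common;

public static class ConnectionFactory
{
    public static WorkItemStore CreateWorkItemStore(Uri accountUri, String personalAccessToken)
    {
        // A Personal Access Token is passed as the password with an empty user name.
        var credentials = new VssCredentials(new VssBasicCredential(String.Empty, personalAccessToken));
        var collection = new TfsTeamProjectCollection(accountUri, credentials);
        collection.EnsureAuthenticated();

        // BypassRules is what allows writing System.ChangedDate / System.ChangedBy
        // in the past; it only works if the authenticated user is a member of the
        // Project Collection Service Accounts group.
        return new WorkItemStore(collection, WorkItemStoreFlags.BypassRules);
    }
}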

Now comes the code part which, thanks to the Azure DevOps API, is really simple. I will not repeat the details of connection and interaction here, because I already discussed them in another post; this time I’m interested in how to create a new Work Item and save multiple versions of it to recreate its history.

public async Task<Boolean> ImportWorkItemAsync(MigrationItem itemToMigrate)
{
    var existingWorkItem = GetWorkItem(itemToMigrate.OriginalId);
    if (existingWorkItem != null)
    {
        Log.Information("A workitem with originalId {originalId} already exists, it will be deleted", itemToMigrate.OriginalId);
        connection.WorkItemStore.DestroyWorkItems(new[] { existingWorkItem.Id });
    }

The core method is ImportWorkItemAsync, which takes a MigrationItem and creates the new Work Item. In the very first lines I simply look for a Work Item that is already bound to that external item; if one is present, I simply destroy it. This approach is radical, but it allows me to issue multiple test import runs without the hassle of deleting everything before each import. More importantly, if some Work Items were not imported correctly, I can simply re-import them and the corresponding Work Items will be recreated correctly.

private WorkItem GetWorkItem(String originalId)
{
    var existingWorkItems = connection
        .WorkItemStore
        .Query($"select * from workitems where {fieldWithOriginalId} = '{originalId}'");
    return existingWorkItems.OfType<WorkItem>().FirstOrDefault();
}

The nice thing about customization is that I can query the Work Item Store using a condition on my newly defined field. To keep everything flexible, the name of the field created in Figure 1 can be specified on the command line. The whole command line for the example looks like this.

--address https://gianmariaricci.visualstudio.com 
--tokenfile C:\develop\Crypted\patOri.txt 
--teamproject TestMigration 
--originalIdField custom_originalId
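
These arguments need to be parsed somehow; a hypothetical options class (sketched here with the CommandLineParser NuGet package, which is an assumption of mine and not necessarily what the sample project uses) could look like this.

using System;
using CommandLine;

// Hypothetical options class: the option names mirror the command line above.
public class MigrationOptions
{
    [Option("address", Required = true, HelpText = "Url of the Azure DevOps account or collection.")]
    public String Address { get; set; }

    [Option("tokenfile", Required = true, HelpText = "File containing the Personal Access Token.")]
    public String TokenFile { get; set; }

    [Option("teamproject", Required = true, HelpText = "Destination Team Project.")]
    public String TeamProject { get; set; }

    [Option("originalIdField", Required = true, HelpText = "Reference name of the custom field that stores the original id.")]
    public String OriginalIdField { get; set; }
}

// Usage sketch: Parser.Default.ParseArguments<MigrationOptions>(args).WithParsed(options => RunMigration(options));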

Once we are sure that the system does not contain a Work Item related to that external id, the code creates a new Work Item in memory.

private WorkItem CreateWorkItem(MigrationItem migrationItem)
{
    WorkItemType type = null;
    try
    {
        type = teamProject.WorkItemTypes[migrationItem.WorkItemDestinationType];
    }
    catch (WorkItemTypeDeniedOrNotExistException) { } // ignore the exception: the error is logged just below

    if (type == null)
    {
        Log.Error("Unable to find work item type {WorkItemDestinationType}", migrationItem.WorkItemDestinationType);
        return null;
    }

    WorkItem workItem = new WorkItem(type);
    Log.Information("Created Work Item for type {workItemType} related to original id {originalId}", workItem.Type.Name, migrationItem.OriginalId);

    //now start creating basic value that we need, like the original id 
    workItem[fieldWithOriginalId] = migrationItem.OriginalId;
    return workItem;
}

The type of the Work Item to create is part of the MigrationItem information, and the code simply verifies that such a Work Item type really exists in the current team project. If everything is ok, the code creates a new WorkItem in memory using that type, then populates the original id field. This structure allows me to query the original system and then decide, for each MigrationItem, the destination type in Azure DevOps.
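
If the source system has its own item types, a hypothetical mapping (which is not part of the sample) could be as simple as a dictionary consulted while building each MigrationItem; the keys and values below are just examples.

using System;
using System.Collections.Generic;

// Hypothetical mapping from source system types to Azure DevOps Work Item types.
public static class DestinationTypeMapper
{
    private static readonly Dictionary<String, String> Map =
        new Dictionary<String, String>(StringComparer.OrdinalIgnoreCase)
        {
            ["defect"] = "Bug",
            ["story"] = "Product Backlog Item",
            ["chore"] = "Task",
        };

    public static String MapDestinationType(String sourceType)
    {
        // The fallback type is arbitrary, chosen only for the example.
        return Map.TryGetValue(sourceType, out var destinationType)
            ? destinationType
            : "Product Backlog Item";
    }
}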

The last step is to iterate through all the versions of the original item and save every change to the Work Item Store, to recreate the history of the original MigrationItem.

//now that we have work item, we need to start creating all the versions
for (int i = 0; i < itemToMigrate.Versions.Count(); i++)
{
    var version = itemToMigrate.GetVersionAt(i);
    workItem.Fields["System.ChangedDate"].Value = version.VersionTimestamp;
    workItem.Fields["System.ChangedBy"].Value = version.AuthorEmail;
    if (i == 0)
    {
        workItem.Fields["System.CreatedBy"].Value = version.AuthorEmail;
        workItem.Fields["System.CreatedDate"].Value = version.VersionTimestamp;
    }
    }
    workItem.Title = version.Title;
    workItem.Description = version.Description;
    var validation = workItem.Validate();
    if (validation.Count > 0)
    {
        Log.Error("N°{errCount} validation errors for work Item {workItemId} originalId {originalId}", validation.Count, workItem.Id, itemToMigrate.OriginalId);
        foreach (Field error in validation)
        {
            Log.Error("Version {version}: We have validation error for work Item {workItemId} originalId {originalId} - Field: {name} ErrorStatus {errorStatus} Value {value}", i, workItem.Id, itemToMigrate.OriginalId, error.Name, error.Status, error.Value);
        }
        return false;
    }
    workItem.Save();
    if (i == 0)
    {
        Log.Information("Saved for the first time Work Item for type {workItemType} with id {workItemId} related to original id {originalId}", workItem.Type.Name, workItem.Id, itemToMigrate.OriginalId);
    }
    else
    {
        Log.Debug("Saved iteration {i} for original id {originalId}", i, itemToMigrate.OriginalId);
    }
}

return true;

The above code migrates only Title and Description, but it nevertheless allows me to verify that I’m able to import not only a snapshot of the Work Item, but its full history. As you can see this is really simple: for each iteration I only need to populate System.ChangedDate and System.ChangedBy, and in the first iteration I also set System.CreatedBy and System.CreatedDate.

One rule is in place: you cannot save with a date that is not greater than the date used for the last save, which forces you to import all the versions in the correct order. Apart from that, you can simply save with a date in the past and as a different user.
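
For this reason, if the source system does not already return versions ordered by date, it is worth sorting them before entering the loop shown above; a minimal sketch (this is not part of the sample, and it assumes LINQ is available):

// Process versions oldest-first, so that every save uses a ChangedDate
// greater than the one used for the previous save.
var orderedVersions = itemToMigrate.Versions
    .OrderBy(v => v.VersionTimestamp)
    .ToList();

for (int i = 0; i < orderedVersions.Count; i++)
{
    var version = orderedVersions[i];
    // ... same body as the loop shown above ...
}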

Before saving the Work Item I simply call the Validate() method, which detects any validation error before the save; in case of error, I log it and then return false to inform the caller that the Work Item was not fully imported.

Here is a simple test that imports some bogus data.

MigrationItem mi = new MigrationItem();
mi.OriginalId = "AA123";
mi.WorkItemDestinationType = "Product Backlog Item";
mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title test",
    VersionTimestamp = new DateTime(2010, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified Again",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

var importResult = importer.ImportWorkItemAsync(mi).Result;

Launching the test program I see this output:

Figure 4: Export output.

This is not the first run, so the log informs me that an item bound to that original id already existed and was deleted, then that a new Work Item was created and subsequently updated two times.

Figure 5: The Work Item was correctly created in the Azure DevOps collection

The nice thing is that the Work Item history reports all the modifications with the correct date, and this can be done only because the user that generated the token is a member of the Project Collection Service Accounts group. (Do not forget to remove the user from that group after finishing the import, or use a dedicated user for the import.)

Thanks to the great flexibility of the Azure DevOps API you can easily import data with full history, obtaining a migration with higher fidelity than importing only the last snapshot of the data.

The only indication that the action was performed by an import is that the history shows “(Via Gian Maria Ricci – aka Alkampfer)”, which indicates that the action was not really performed by Alkampfer himself, but was instead done with impersonation. Nevertheless, you maintain the full history of the original item.

You can find the full example on GitHub.

Happy Azure DevOps.

Gian Maria.

End of life of PHP 5.6, please upgrade to version 7

PHP 5.6 has reached end of life, which means it will no longer receive any security updates. If you, like me, run a site with WordPress or any other technology based on PHP, you should consider moving to PHP 7 as soon as possible. This avoids a bad surprise if someone discovers a new security bug and uses it to own your site.

In this article you can find some interesting information, and you can verify that moving to PHP 7 will probably make your site faster.

Since a compromised site can be used for phishing or to host malicious scripts, it is a good habit to keep your WordPress site and the PHP version used by your blog up to date.

Gian Maria.

Using VMware machines when you have Hyper-V

There are lots of VMs containing demos, labs and so on around the internet, and Hyper-V is surely not their primary target virtualization system. This is because it is present on desktop OSes only from Windows 8 onwards, it is not free (it is included only in Windows Professional) and it is bound to Windows. If you have to create a VM to share on the internet, 99% of the time you want to target VMware or VirtualBox and a Linux guest system (no license needed). Since VirtualBox can run VMware machines with little trouble, VMware is the de facto standard in this area.

Virtual machines with demos, labs and so on that you find on the internet are 99% of the time targeted at the VMware platform.

In the past I’ve struggled a lot with conversion tools that convert VMware disk formats to the Hyper-V format, but sometimes this does not work because the virtualized hardware of the two systems is really different.

If you really want to be productive, the only solution I’ve found is installing an ESXi server on an old machine, an approach that has given me lots of satisfaction. First of all, you can use the standalone VMware conversion tool to convert a VMware VM to the standard OVF format in a few minutes, then upload the image to your ESXi server and you are ready to go.

Figure 1: A simple command line instruction converts the VM into OVF format

Figure 2: From the ESXi interface you can choose to create a new VM from an OVF file

Once you have chosen the OVF file and the disk file, you just need to specify some basic characteristics for the VM and then you can simply let the browser do the rest; your machine will be created on your ESXi node.

Figure 3: Your VM will be created directly from your browser.

The second advantage of ESXi is that it is a really mature and powerful virtualization system available for free. The only drawback is that it needs a serious network card; it will not work with a crappy card integrated into a consumer motherboard. For my ESXi test instance I’ve used my old i7-2600K with a standard ASUS P8P67 motherboard (overclocked) and then I’ve spent a few bucks (approximately 50€) to buy a used 4x Gigabit network card. This gives me four independent NICs with a decent network chip, each one running at 1 Gbit. Used cards are really cheap, especially because there are no drivers for the latest operating systems, so they are thrown away on eBay for a few bucks. When you are using a virtual machine to test something that involves networking, you will thank ESXi and a decent multi-NIC card, because you can create real network topologies, like having three machines, each one using a different NIC and potentially connected to a different router / switch, to test a real production scenario.

ESXi NIC virtualization, when backed by a really good NIC, is FAR more powerful than VirtualBox or even VMware Workstation. Combined with a multi-NIC card, it gives you the ability to simulate real network topologies.

If you are using Linux machines, the VMware environment has another great advantage over Hyper-V: it supports all resolutions. You are not limited to Full HD obtained by manually editing the grub configuration; you can change your resolution from the Linux control panel, or directly enable live resizing with the Remote Console available in ESXi.

If you really want to create a test lab, especially if you want to do security testing, having one or more ESXi hosts is something that pays off a lot in the long run.

Gian Maria

ESXi, Hyper-V and Linux

I mainly use Hyper-V to virtualize my test environments and I’m really happy with it; the only problem is virtualizing Linux desktop environments, especially if you have monitors with a resolution higher than Full HD (I have not found a way to make Hyper-V run at a resolution greater than Full HD).

To overcome this limitation, I’ve converted my old workstation into a virtualization host running VMware ESXi and I’m really satisfied. Here are a couple of tricks I’ve learned (I’m completely new to the latest version of ESXi).

ESXi is free and it is a really powerful virtualization system; if you have hardware to spare, I strongly suggest having an ESXi instance so that you are able to run both Hyper-V and VMware based virtual machines.

First of all, you probably need to buy a new network adapter: ESXi is really picky about network cards and it refuses to install if you only have a crappy integrated Ethernet card. I bought an old used 4x1Gbit Intel card on eBay. If you look around you can find old boards that are perfect for ESXi at a really cheap price. Once you have a good Ethernet adapter you are ready to go. Here are the physical NICs on my system.

Figure 1: NIC adapters on my system.

I strongly suggest you read the Compatibility Guide (https://www.vmware.com/resources/compatibility/search.php); in my experience, Intel cards are the most compatible ones, even if they are really old. This card of mine does not work in Windows 2012 or later (it is really, really old), but it works like a charm in ESXi 6.5; it has 4 physical NICs and it cost me around 40€.

Another thing I’ve learned is not to use the web interface to access Linux machines. Since I’m in Italy I have an Italian keyboard layout, and I had lots of problems with key mapping for Linux machines when I accessed them through the standard web interface. The problem happens because, once you have started your VM, it is quite natural to click the preview to open a web interface and interact with the machine.

Figure 2: Click on the preview, and you will access the machine with a web interface

If you instead click on the Console menu, you can download a standalone remote console tool (available for all operating systems) that allows you to connect to your virtual machines and avoid keyboard problems.

The latest version of ESXi can be entirely managed through the web interface, but to interact with virtual machines the best solution is to use the VMRC standalone tool.

Figure 3: Download VMRC standalone software to connect to your machines

Once you have downloaded and installed the VMRC tool, you can simply use the “Launch Remote Console” menu option and you will be connected to your machine with a really nice standalone console that will solve all of your keyboard problems.

Gian Maria.

Dotnetcore, CI, Linux and VSTS

If you have a dotnetcore project, it is a good idea to set up continuous integration on a Linux machine. This guarantees that the solution actually compiles correctly and all the tests run perfectly in a Linux environment too. If you are 100% sure that a dotnetcore project that runs fine under Windows will also run fine under Linux, you will have some interesting surprises. The first and trivial difference is that the Linux filesystem is case sensitive.

If you use dotnetcore, it is always a good idea to immediately set up a build against a Linux environment to ensure portability.

I start by creating a dedicated pool for Linux machines. Actually, having a dedicated pool is not necessary, because a build can simply require a Linux capability, but I’d like to have all the Linux build agents in a single place for easier management.

Figure 1: Create a pool dedicated to build agents running Linux operating system

Pressing the “Download agent” button brings up a nice UI that explains in a really easy way how to deploy the agent on your Linux machine.

Figure 2: You can easily download the agent from VSTS / TFS web interface

The instructions are detailed, and it is really easy to start your agent this way: just run the configuration shell script, then run the agent with the run.sh shell script.

There is also another interesting approach: you can give a shot to the official Docker image that you can find here: https://github.com/Microsoft/vsts-agent-docker. The only thing I need to do is run the Docker image with this command.

sudo docker run -e VSTS_ACCOUNT=prxm -d -e VSTS_TOKEN=my_PAT_TOKEN -e VSTS_AGENT='schismatrix' -e VSTS_POOL='Linux' -it microsoft/vsts-agent

Please be patient on the first run: the Docker image is pretty big, so you need to wait for the download to finish. Once the container is running, you should verify with sudo docker ps that it is really running fine, and you should check on the Agent Pools page that the agent is actually connected. The drawback of this approach is that currently only Ubuntu is supported with Docker, but the situation will surely change in the future.

Docker is surely the simplest way to run a VSTS / TFS Linux build agent.

Another thing to pay attention to is running the image with the -d option: whenever you create a new instance of the VSTS agent from the base Docker image, the image downloads the latest agent, which implies that you need to wait a decent amount of time before the agent is up and running, especially if you, like me, are on a standard ADSL connection with a maximum download speed of 5 Mbps.

Figure 3: Without the -d option, the image will run interactively and you need to wait for the agent to be downloaded

As you can see from the image, running a new Docker instance starts from the base Docker image, contacts the VSTS server and downloads and installs the latest version of the agent.

Figure 4: After the agent is downloaded, the image automatically configures and runs it, and you are up and running.

Only when the output of the Docker image states “Listening for Jobs” is the agent online and usable.

Figure 5: Agent is alive and kicking

Another interesting behavior is that, when you press CTRL+C to stop the interactive container instance, the Docker image removes the agent from the server, avoiding the risk of leaving orphan registrations in your VSTS server.

Figure 6: When you stop the docker image, the agent is de-registered to avoid orphan agent registrations.

Please remember that whenever you stop the container with CTRL+C and then restart it, it will download the VSTS agent again.

This happens because, whenever the container stops and runs again, it needs to re-download everything that is not included in the state of the container itself. This is usually not a big problem, and I have to admit that this little nuisance is outweighed by the tremendous simplicity you get: just run a container and you have your agent up and running, with the latest version of dotnetcore (2.0) and other goodness.

The only real drawback of this approach is that you have little control over what is available on the image. As an example, if you need some special software installed on the build machine, you probably need to fork the image and configure it for your needs.

Once everything is up and running (Docker or manual run.sh), just fire a build and watch it being executed on your Linux machine.

Figure 7: Build with tests executed on a Linux machine.

Gian Maria