How to configure Visual Studio as Diff and Merge tool for Git

After almost six years, the post on How to configure diff and merge tool in Visual Studio Git Tools is still read by people who find it useful, but it is now really old and needs to be updated.

That post was written when Visual Studio 2012 was the latest version and the integration with Git was still really young, provided by an external plugin from Microsoft with really basic support. If you use Visual Studio 2017 or later, you can simply go to Team Explorer and open the settings of the repository.

Figure 1: Git repository settings inside Visual Studio Team Explorer

The Settings pane contains a specific section for Git, where you can configure settings for the current repository or global settings, valid for all repositories of the current user.

Figure 2: Git settings inside Visual Studio

If you open Repository Settings, you usually find that no specific diff or merge tool is set. Merge and diff configuration is typically done at user level, not for each single repository.

Figure 3: Diff and Merge tool configuration inside Visual Studio.

As you can see in Figure 3, no diff or merge tool is set for the current repository, which means that the default one for the user will be used (in my situation, none). If you use only Visual Studio this setting is not so important: if you get a conflict during a merge or rebase, Visual Studio will automatically show the conflicts and guide you through merging.

If you are inside Visual Studio it will handle diff and merge automatically, even if it is not configured as the diff or merge tool. The rationale behind this choice is: if you are inside a tool (like VS) that has full support for diff and merge, the tool will automatically offer you its diff and merge capabilities without checking the repo configuration.

This happens because when you open a Git repository, Visual Studio monitors the status of the repository and, if some operation has unresolved conflicts, it shows the situation to the user without the need to do anything. The setting in Figure 3 is useful only if you are operating with some other tool or with the command line. If you get a conflict during an operation started from any other tool (GUI or command line), the procedure is:
1) Open VS
2) From VS Team Explorer, locate the local Git repository and open it
3) Go to the Team Explorer Changes pane to start resolving conflicts

If instead you configured VS as diff and merge tool, you can simply issue a git mergetool command and everything happens automatically, without any user intervention. But to be honest, the latest VS Git integration is really good and it is often better to manually open the local repository. As an example, if you are doing a rebase from the command line and you get conflicts, it is better to manually open VS, solve the conflicts, then continue the rebase operation inside VS; if you get further conflicts, you do not need to wait for VS to reopen at each git mergetool command.
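
For reference, here is how a conflicted rebase started from the command line can be resolved with the configured merge tool (the branch name and file name are just examples):

git rebase origin/master
# ... CONFLICT (content): Merge conflict in SomeFile.cs
git mergetool
git rebase --continue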

But if you really want to configure VS as diff and merge tool, pressing the “Use Visual Studio” button (Figure 3) modifies your local gitconfig. The net result is similar to what I suggested in my old post: VS just adds the six sections for diff and merge to the config file.

Figure 4: Git diff and merge section as saved from Visual Studio 2019 preview
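
The six sections look roughly like the sketch below; the exact vsDiffMerge.exe location depends on your Visual Studio edition and version, so treat the <path-to-VS> placeholder as an assumption to adapt:

[diff]
    tool = vsdiffmerge
[difftool]
    prompt = true
[difftool "vsdiffmerge"]
    cmd = "<path-to-VS>\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\vsDiffMerge.exe" "$LOCAL" "$REMOTE" //t
    keepbackup = false
[merge]
    tool = vsdiffmerge
[mergetool]
    prompt = true
[mergetool "vsdiffmerge"]
    cmd = "<path-to-VS>\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\vsDiffMerge.exe" "$REMOTE" "$LOCAL" "$BASE" "$MERGED" //m
    keepbackup = false
    trustexitcode = true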

If Visual Studio is your tool of choice, I simply suggest configuring it globally (the file is named %userprofile%\.gitconfig) so you can invoke the merge tool from everywhere and have Visual Studio handle everything.
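
For example, assuming the vsdiffmerge sections shown above are present in the global file, you can make it the global default with:

git config --global diff.tool vsdiffmerge
git config --global merge.tool vsdiffmerge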

Gian Maria.

Unable to Sysprep Windows 10 due to Candy Crush …

I was trying to sysprep a Windows 10 virtual machine hosted in Hyper-V but I got error messages like

Package CandyCrush.. was installed for a user, but not provisioned …

It turns out that the standard Windows 10 installer installs some applications from the Store that conflict with sysprep. They need to be uninstalled one by one, and to speed up the process I suggest using the Get-AppxPackage PowerShell cmdlet.

As an example to uninstall every application called Candy you can issue

Get-AppxPackage -AllUsers *Candy* | Remove-AppxPackage
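
The sysprep error complains about packages installed for a user but not provisioned; if the package also has a provisioned copy, it may need to be removed too. A minimal sketch, under the assumption that the package display name contains Candy:

# Also remove the provisioned copy, so it is not reinstalled for new users
Get-AppxProvisionedPackage -Online |
    Where-Object DisplayName -like "*Candy*" |
    Remove-AppxProvisionedPackage -Online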

After you have uninstalled all unwanted applications, you should be able to sysprep your Windows 10 machine.

Gian Maria.

Hosted Agents plus Docker, a perfect match for Azure DevOps and Open Source projects

If you want to build an open source project with Azure DevOps, you can open a free account and you get 10 concurrent pipelines with free agents to build your project, yes, completely free. The only problem you have in this scenario is that sometimes you need some prerequisites installed on the build machine, like MongoDb, and they are missing on hosted builds.

Let's take as a use case NStore, an open source library for Event Sourcing in C# that needs to run unit tests against MongoDb and SqlServer, prerequisites that are not present in Linux Hosted Agents. Before giving up on Hosted Agents and starting to deploy private agents, you need to know that Docker is up and running in Hosted Agents and it can be used to provide your missing prerequisites.

Thanks to Docker, you can simply provide the prerequisites your build needs to run in a hosted environment

Having Docker preinstalled on the Hosted Build Agent, combined with the Docker Task, gives you tremendous power. If I want to build NStore on a Linux Hosted Agent, here is a possible build definition that runs perfectly fine.

Figure 1: Simple build definition that starts MongoDb and Sql Server instances with Docker before actually running the build.

If you examine the very first task, it is amazing how simple it is to start a MsSql instance running on your Linux box. At the end of task execution you have a fully functional container running in the Hosted Agent.

Figure 2: Running MsSql as a container in Linux
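
The task boils down to a docker run invocation like the following sketch (the image name reflects what Microsoft published on Docker Hub at the time; the container name and password are my assumptions):

docker run -d --name mssql \
    -e ACCEPT_EULA=Y \
    -e SA_PASSWORD='sqlPw3$secure' \
    -p 1433:1433 \
    microsoft/mssql-server-linux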

You just need to remember to map the port (-p 1433:1433) so that you can access the SqlServer instance, and you are done.

Task number 2 uses the very same technique to run a MongoDB instance inside another Docker container, then Task 3 is a simple docker ps command, just to verify that the two containers are running correctly. As you can see from Figure 3, it is quite useful to know if the containers really started correctly.

Figure 3: The docker ps command allows for a simple dump of all containers running on the machine
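
In command form, the two tasks are roughly equivalent to the following (the container name is an assumption):

docker run -d --name mongo -p 27017:27017 mongo
docker ps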

You can dump the output of every container: in Task number 4 I'm just running a docker logs command for the MsSql container to verify, in case all MsSql tests are failing, why the container did not start (for example, you forgot ACCEPT_EULA, or you chose a password that is not complex enough).

Figure 4: Logging the output of containers to troubleshoot them.
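
Assuming the container name used in the earlier sketch, the task simply runs:

docker logs mssql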

Remember that if a container does not start correctly your build will have tons of failing tests, so you really need a way to quickly understand whether the tests are really failing or your container instance simply did not start (and the reason why it failed).

All subsequent tasks are standard for a .NET Core project: just dotnet restore, build and test your solution, and upload test results to the build, so you can have a nice report of all of your tests.

It is almost impossible to expect someone to give you a build agent with everything you could possibly need, but if you give the user Docker support, life is really easier.

Finally, to make everything flexible, you should grab the connection strings for tests from environment variables. NStore uses a couple of environment variables called NSTORE_MONGODB and NSTORE_MSSQL to specify the connection strings used for tests. Remember that all variables of a build are copied to environment variables during the build.
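
A minimal C# sketch of how the tests can pick those variables up (the fallback values for local runs are my assumptions):

using System;

public static class TestConfig
{
    // The build copies build variables into environment variables,
    // so these are populated automatically during the pipeline run.
    public static string MongoDbConnection =>
        Environment.GetEnvironmentVariable("NSTORE_MONGODB")
        ?? "mongodb://localhost:27017/nstore-tests";

    public static string MsSqlConnection =>
        Environment.GetEnvironmentVariable("NSTORE_MSSQL")
        ?? "Server=localhost,1433;User Id=sa;Password=sqlPw3$secure";
}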

Figure 5: Test connection strings are stored directly in build variables.

As you can see from Figure 5, I used a MongoDb instance without a password (this is an instance in Docker that will be destroyed after the build, so it is acceptable to run without a password), but you can usually configure Docker instances with start parameters. In this example I gave SQL Server a strong password (it is required for the container to start).

Remember, if you have an open source project, you can build for free with Azure DevOps pipelines with minimum effort, and before giving up on Hosted Agents, just verify whether Docker can provide what you are missing.

Gian Maria.

Import Work Item from external system to Azure DevOps

In a previous post I dealt with exporting Work Item information to a Word file with the Azure DevOps API; now I want to deal with the inverse operation, importing data from an external service into Azure DevOps.

If the source service is a Team Foundation Server, you can use the really good tool by Naked Agility Ltd that you can find in the marketplace, and you can also have a shot at the official migration tool if you need to migrate an entire collection. But if you have some data to import from an external system, using the API can be a viable solution.

I’ve created a simple project in Github to demonstrate basic usage of API to import data into Azure DevOps (both server and online version), where I’m dealing only with the Azure DevOps parts, leaving to the user the burden to implement code to extract data from source system.

If you need to import data into a system, it is better not to assume where the data is coming from. By dealing only with the import part, you leave to others the work of getting data from the source server.

The code uses a MigrationItem class to store all the information we support when migrating data to Azure DevOps. This class contains a string field to identify the unique id in the source system, as well as the work item type to create. Then it contains a list of MigrationItemVersion instances that represent the content of the data in the original system over time. In this proof of concept I support only Title, Description and the date of the modification. This structure is needed because I want to migrate the full history from the original system, not only a snapshot of the latest version, so I need to know how the original data evolved over time.

public class MigrationItem
{
    public MigrationItem()
    {
        _versions = new List<MigrationItemVersion>();
    }

    /// <summary>
    /// This is the original Id of the original system, to keep track of what
    /// was already imported.
    /// </summary>
    public String OriginalId { get; set; }

    /// <summary>
    /// You need to specify the type of work item to be used during import.
    /// </summary>
    public String WorkItemDestinationType { get; set; }

    private readonly List<MigrationItemVersion> _versions;

    // Members used by the rest of the sample to enumerate and add versions.
    public IEnumerable<MigrationItemVersion> Versions => _versions;

    public MigrationItemVersion GetVersionAt(Int32 index) => _versions[index];

    public void AddVersion(MigrationItemVersion version) => _versions.Add(version);
}

To make this code run you need to do a couple of things. First of all, you need to create a customized process from one of the three base processes and add at least one text field to store the original id.

Figure 1: Adding a custom field to store the original id of the imported work item

This field is really important, because it will help the user correlate an imported Work Item with the data in the original system, thus allowing the import tool to identify already created Work Items in order to fix or re-import some of them.

Whenever you import or move data between systems, you need at least a unique identifier of the data in the original system to be stored in the imported Work Item.

Figure 2: Custom original id on an imported Work Item.

As you can see in Figure 2, this information is read-only, because it is used only by the importer and should never be changed by a human; you can obtain this with a simple Work Item rule.

Figure 3: Make OriginalId field Readonly

This prevents users from messing with and changing this number. You can appreciate how easy it is to modify the template and create new fields and rules in Azure DevOps.

Thanks to process inheritance, it is really simple to modify the process in Azure DevOps, adding information specific to your own process, like the id of the original item in case of an import.

Now that everything is in place, you need to add the user performing the import as a member of the Project Collection Service Accounts group; this specific group allows the user to perform actions on behalf of others. This is a very special permission, but you need it if you want to perform a migration with high fidelity.

Once the migration is finished, you can remove the user from the Project Collection Service Accounts group and restore its standard permissions. Remember: since the sample project uses an access token to authenticate, be sure that the user that generated the token is a member of the Project Collection Service Accounts group before running the test migration tool.

Being part of Project Collection Service Accounts allows a user to impersonate others, as well as to bypass some of the validation rules for Work Items. This is needed to change a Work Item with a date in the past and on behalf of another user.

Now comes the code part which, thanks to the Azure DevOps API, is really simple. I will not go into the details of connection and interaction, because I already discussed them in another post; this time I'm interested in how I can create a new Work Item and save multiple versions of it to recreate the history.

public async Task<Boolean> ImportWorkItemAsync(MigrationItem itemToMigrate)
{
    var existingWorkItem = GetWorkItem(itemToMigrate.OriginalId);
    if (existingWorkItem != null)
    {
        Log.Information("A workitem with originalId {originalId} already exists, it will be deleted", itemToMigrate.OriginalId);
        connection.WorkItemStore.DestroyWorkItems(new[] { existingWorkItem.Id });
    }

The core method is ImportWorkItemAsync, which takes a MigrationItem and creates the new work item. In the very first lines I simply look for a work item that is already bound to that external item; if it is present, I simply destroy it. This approach is radical, but it allows me to issue multiple test import runs without the hassle of deleting everything before each import. More importantly, if some Work Items were imported incorrectly, I can simply re-export them and the corresponding Work Items will be recreated correctly.

private WorkItem GetWorkItem(String originalId)
{
    // WIQL query on the custom field that stores the original id.
    var existingWorkItems = connection
        .WorkItemStore
        .Query($@"select * from workitems where {fieldWithOriginalId} = '{originalId}'");
    return existingWorkItems.OfType<WorkItem>().FirstOrDefault();
}

The nice thing about customization is that I can query the Work Item Store using a condition on my newly defined field. To keep everything flexible, the name of the field created in Figure 1 can be specified on the command line. The whole command line for the example looks like this.

--address https://gianmariaricci.visualstudio.com 
--tokenfile C:\develop\Crypted\patOri.txt 
--teamproject TestMigration 
--originalIdField custom_originalId

Once we are sure that the system does not contain a Work Item related to that external id, the code creates a new Work Item in memory.

private WorkItem CreateWorkItem(MigrationItem migrationItem)
{
    WorkItemType type = null;
    try
    {
        type = teamProject.WorkItemTypes[migrationItem.WorkItemDestinationType];
    }
    catch (WorkItemTypeDeniedOrNotExistException) { } // ignore: a missing type is handled and logged below

    if (type == null)
    {
        Log.Error("Unable to find work item type {WorkItemDestinationType}", migrationItem.WorkItemDestinationType);
        return null;
    }

    WorkItem workItem = new WorkItem(type);
    Log.Information("Created Work Item for type {workItemType} related to original id {originalId}", workItem.Type.Name, migrationItem.OriginalId);

    //now start creating basic value that we need, like the original id 
    workItem[fieldWithOriginalId] = migrationItem.OriginalId;
    return workItem;
}

The type of the Work Item to create is part of the MigrationItem information, and the code simply verifies that such a Work Item type really exists in the current team project. If everything is ok, the code creates a new WorkItem in memory using that type, then populates the original id field. This structure allows me to query the original system and then decide, for each MigrationItem, the destination type in Azure DevOps.

The last step is iterating through all the versions of the original item and saving every change to the Work Item Store, to recreate the history of the original MigrationItem.

//now that we have work item, we need to start creating all the versions
for (int i = 0; i < itemToMigrate.Versions.Count(); i++)
{
    var version = itemToMigrate.GetVersionAt(i);
    workItem.Fields["System.ChangedDate"].Value = version.VersionTimestamp;
    workItem.Fields["System.ChangedBy"].Value = version.AuthorEmail;
    if (i == 0)
    {
        workItem.Fields["System.CreatedBy"].Value = version.AuthorEmail;
        workItem.Fields["System.CreatedDate"].Value = version.VersionTimestamp;
    }
    }
    workItem.Title = version.Title;
    workItem.Description = version.Description;
    var validation = workItem.Validate();
    if (validation.Count > 0)
    {
        Log.Error("N°{errCount} validation errors for work Item {workItemId} originalId {originalId}", validation.Count, workItem.Id, itemToMigrate.OriginalId);
        foreach (Field error in validation)
        {
            Log.Error("Version {version}: We have validation error for work Item {workItemId} originalId {originalId} - Field: {name} ErrorStatus {errorStatus} Value {value}", i, workItem.Id, itemToMigrate.OriginalId, error.Name, error.Status, error.Value);
        }
        return false;
    }
    workItem.Save();
    if (i == 0)
    {
        Log.Information("Saved for the first time Work Item for type {workItemType} with id {workItemId} related to original id {originalId}", workItem.Type.Name, workItem.Id, itemToMigrate.OriginalId);
    }
    else
    {
        Log.Debug("Saved iteration {i} for original id {originalId}", i, itemToMigrate.OriginalId);
    }
}

return true;

The above code migrates only Title and Description, but it nevertheless allows me to verify that I'm able to import not only a snapshot of a Work Item, but its full history. As you can see this is really simple: for each iteration I only need to populate System.ChangedDate and System.ChangedBy, and in the first iteration I can also set System.CreatedBy and System.CreatedDate.

One rule is in place: you cannot save with a date that is not greater than the date used for the last save, which forces you to import all the versions in the correct order. Apart from this, you can simply save with a date in the past and as a different user.
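
Given that rule, it can be worth defensively ordering the versions before applying them; this ordering step is my addition, not part of the sample:

// Apply versions in chronological order, so that every save uses a
// System.ChangedDate greater than the previous one (requires System.Linq).
var orderedVersions = itemToMigrate.Versions
    .OrderBy(v => v.VersionTimestamp)
    .ToList();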

Before saving the Work Item I simply call the Validate() method, which detects any validation error before the Work Item is saved; in case of error, I log it and return false to inform the caller that the Work Item was not fully imported.

This is a simple test that imports some bogus data.

MigrationItem mi = new MigrationItem();
mi.OriginalId = "AA123";
mi.WorkItemDestinationType = "Product Backlog Item";
mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title test",
    VersionTimestamp = new DateTime(2010, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified Again",
    VersionTimestamp = new DateTime(2012, 01, 23, 22, 10, 32),
});

var importResult = importer.ImportWorkItemAsync(mi).Result;

Launching the test program, I get this output:

Figure 4: Import tool output.

This is not the first run, so the log informs me that an item bound to that original id already existed and was deleted, then that a new Work Item was created and updated two times.

Figure 5: Work Item correctly created in the Azure DevOps collection

The nice thing is that the Work Item history reports all the modifications with the correct date, and this can be done only because the user that generated the token is a member of the Project Collection Service Accounts group. (Do not forget to remove the user from the group after finishing the import, or use a specific user for the import.)

Thanks to the great flexibility of the Azure DevOps API, you can easily import data with full history, achieving a migration with higher fidelity than importing only the last snapshot of the data.

The only indication that the action was performed by an import is that the history shows "(Via Gian Maria Ricci - aka Alkampfer)", which indicates that the action was not really performed by Alkampfer the Zoalord, but was instead done with impersonation. Nevertheless, the full history is maintained.

You can find the full example on GitHub.

Happy Azure DevOps.

Gian Maria.

End of life of PHP 5.6, please upgrade to version 7

PHP 5.6 reached end of life and this means that it will no longer receive any security updates. If you, like me, run a site with WordPress or any other technology based on PHP, you should consider moving to PHP 7 as soon as possible. This is needed to avoid a bad surprise if someone discovers a new security bug and uses it to own your site.

In this article you can find some interesting information, and you can verify that moving to PHP 7 will probably make your site faster.

Since a compromised site can be used for phishing or to host malicious scripts, it is a good habit to keep your WordPress site and the PHP version used by your blog up to date.

Gian Maria.