Import Work Item from external system to Azure DevOps

In a previous post I dealt with exporting Work Item information to a Word file with the Azure DevOps API; now I want to deal with the inverse operation, importing data from an external service into Azure DevOps.

If the source service is a Team Foundation Server, you can use the really good tool by Naked Agility Ltd that you can find in the marketplace, or you can have a shot at the official migration tool if you need to migrate an entire collection. But if you have data to import from an external system, using the API can be a viable solution.

I’ve created a simple project on GitHub to demonstrate basic usage of the API to import data into Azure DevOps (both the server and the online version). The project deals only with the Azure DevOps part, leaving to the user the burden of implementing the code that extracts data from the source system.

If you need to import data into a system, it is better not to assume where the data is coming from. By dealing only with the import part, you leave to others the work of getting data out of the source server.

The code uses a MigrationItem class to store all the information we support when migrating data to Azure DevOps. This class contains a string field that identifies the unique id in the source system, as well as the work item type to create. It also contains a list of MigrationItemVersion objects that represent the content of the data in the original system over time. In this proof of concept I support only Title, Description and the date of the modification. This structure is needed because I want to migrate the full history from the original system, not only a snapshot of the latest version, so I need to know how the original data looked over time.

public class MigrationItem
{
    public MigrationItem()
    {
        _versions = new List<MigrationItemVersion>();
    }

    /// <summary>
    /// This is the Id of the item in the original system, used to keep track of
    /// what was already imported.
    /// </summary>
    public String OriginalId { get; set; }

    /// <summary>
    /// You need to specify the type of Work Item to be used during import.
    /// </summary>
    public String WorkItemDestinationType { get; set; }

    private readonly List<MigrationItemVersion> _versions;
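
    // NOTE: the listing above is truncated; the members below are a minimal sketch,
    // reconstructed from how they are used later in this post (Versions, AddVersion,
    // GetVersionAt and the MigrationItemVersion fields), not the exact sample code.

    /// <summary>
    /// All the versions of the item as it changed over time in the original system.
    /// </summary>
    public IEnumerable<MigrationItemVersion> Versions => _versions;

    public void AddVersion(MigrationItemVersion version) => _versions.Add(version);

    public MigrationItemVersion GetVersionAt(Int32 index) => _versions[index];
}

/// <summary>
/// Snapshot of the item in the original system at a given point in time.
/// </summary>
public class MigrationItemVersion
{
    public String Title { get; set; }

    public String Description { get; set; }

    public String AuthorEmail { get; set; }

    public DateTime VersionTimestamp { get; set; }
}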

To make this code run you need to do a couple of things. First of all you need to create a customized process from one of the three base processes and add at least one text field to store the original id.

Figure 1: Adding a custom field to store the original id of the imported work item

This field is really important, because it will help the user correlate imported Work Items to data in the original system, and it allows the import tool to identify already created Work Items in case some of them need to be fixed or re-imported.

Whenever you import or move data between systems, you need at least a unique identifier of the data in the original system to be stored into the imported Work Item.

Figure 2: Custom original id on an imported Work Item.

As you can see in Figure 2 this information is read only, because it is used only by the importer and should never be changed by a human. You can obtain this behavior with a simple Work Item rule.

Figure 3: Make OriginalId field Readonly

This prevents users from messing with and changing this number. You can appreciate how easy it is to modify the template and create new fields and rules in Azure DevOps.

Thanks to process inheritance, it is really simple to modify a process in Azure DevOps, adding information specific to your own process, like the id of the original item in the case of an import.

Now that everything is in place, you need to add the user that is performing the import as a member of the Project Collection Service Accounts group; this specific group allows the user to perform actions on behalf of other users. This is a very special permission, but you should use it if you want to perform a migration with high fidelity.

Once the migration is finished, you can remove the user from the Project Collection Service Accounts group and restore its standard permissions. Remember, since the sample project uses a Personal Access Token to authenticate, be sure that the user that generated the token is a member of the Project Collection Service Accounts group before running the test migration tool.

Being part of Project Collection Service Accounts allows a user to impersonate others as well as bypass some of the validation rules for Work Items. This is needed to save a Work Item change with a date in the past and on behalf of another user.

Now comes the code part which, thanks to the Azure DevOps API, is really simple. I will not go into the details of connection and interaction, because that is something I already discussed in another post; this time I’m interested in how I can create a new Work Item and save multiple versions of it to recreate the history.
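
Just as a quick reference, with the classic client object model the connection boils down to something like this (a minimal sketch, assuming authentication with a Personal Access Token; the GitHub sample wraps this logic in its own connection helper class):

// Namespaces: Microsoft.TeamFoundation.Client, Microsoft.VisualStudio.Services.Common,
// Microsoft.TeamFoundation.WorkItemTracking.Client
String personalAccessToken = "your-pat-here"; // e.g. read from the --tokenfile argument
var credentials = new VssCredentials(new VssBasicCredential(String.Empty, personalAccessToken));
var collection = new TfsTeamProjectCollection(new Uri("https://youraccount.visualstudio.com"), credentials);
collection.EnsureAuthenticated();

// Grab the Work Item store and the destination team project.
WorkItemStore workItemStore = collection.GetService<WorkItemStore>();
Project teamProject = workItemStore.Projects["TestMigration"];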

public async Task<Boolean> ImportWorkItemAsync(MigrationItem itemToMigrate)
{
    var existingWorkItem = GetWorkItem(itemToMigrate.OriginalId);
    if (existingWorkItem != null)
    {
        Log.Information("A workitem with originalId {originalId} already exists, it will be deleted", itemToMigrate.OriginalId);
        connection.WorkItemStore.DestroyWorkItems(new[] { existingWorkItem.Id });
    }

The core method is ImportWorkItemAsync, which takes a MigrationItem and creates the new work item. In the very first lines I simply look for a work item that is already bound to that external item; if it is present I simply destroy it. This approach is radical, but it allows me to issue multiple test import runs without the hassle of deleting everything before each import. More importantly, if some of the Work Items were imported incorrectly, I can simply re-import them and the corresponding Work Items will be recreated correctly.

private WorkItem GetWorkItem(String originalId)
{
    var existingWorkItems = connection
        .WorkItemStore
        .Query($@"select * from  workitems where {fieldWithOriginalId} = '" + originalId + "'");
    return existingWorkItems.OfType<WorkItem>().FirstOrDefault();
}

The nice thing about customization is that I can query the Work Item Store using a condition on my newly defined field. To keep everything flexible, the name of the field created in Figure 1 can be specified on the command line. The whole command line for the example looks like this.

--address https://gianmariaricci.visualstudio.com 
--tokenfile C:\develop\Crypted\patOri.txt 
--teamproject TestMigration 
--originalIdField custom_originalId

Once we are sure that the system does not contain a Work Item related to that external id, the code creates a new Work Item in memory.

private WorkItem CreateWorkItem(MigrationItem migrationItem)
{
    WorkItemType type = null;
    try
    {
        type = teamProject.WorkItemTypes[migrationItem.WorkItemDestinationType];
    }
    catch (WorkItemTypeDeniedOrNotExistException) { } //ignore: a missing type is logged below

    if (type == null)
    {
        Log.Error("Unable to find work item type {WorkItemDestinationType}", migrationItem.WorkItemDestinationType);
        return null;
    }

    WorkItem workItem = new WorkItem(type);
    Log.Information("Created Work Item for type {workItemType} related to original id {originalId}", workItem.Type.Name, migrationItem.OriginalId);

    //now start creating basic value that we need, like the original id 
    workItem[fieldWithOriginalId] = migrationItem.OriginalId;
    return workItem;
}

The type of the Work Item to create is part of the MigrationItem information, and the code simply verifies that such a Work Item type really exists in the current team project. If everything is ok, the code creates a new WorkItem in memory using that type, then populates the original id field. This structure allows me to query the original system and then, for each MigrationItem, decide the destination type in Azure DevOps.

The last step is to iterate through all the versions of the original item and save every change to the Work Item store, to recreate the history of the original MigrationItem.

//now that we have work item, we need to start creating all the versions
for (int i = 0; i < itemToMigrate.Versions.Count(); i++)
{
    var version = itemToMigrate.GetVersionAt(i);
    workItem.Fields&#91;"System.ChangedDate"&#93;.Value = version.VersionTimestamp;
    workItem.Fields&#91;"System.ChangedBy"&#93;.Value = version.AuthorEmail;
    if (i == 0)
    {
        workItem.Fields&#91;"System.CreatedBy"&#93;.Value = version.AuthorEmail;
        workItem.Fields&#91;"System.CreatedDate"&#93;.Value = version.VersionTimestamp;
    }
    workItem.Title = version.Title;
    workItem.Description = version.Description;
    var validation = workItem.Validate();
    if (validation.Count > 0)
    {
        Log.Error("N°{errCount} validation errors for work Item {workItemId} originalId {originalId}", validation.Count, workItem.Id, itemToMigrate.OriginalId);
        foreach (Field error in validation)
        {
            Log.Error("Version {version}: We have validation error for work Item {workItemId} originalId {originalId} - Field: {name} ErrorStatus {errorStatus} Value {value}", i, workItem.Id, itemToMigrate.OriginalId, error.Name, error.Status, error.Value);
        }
        return false;
    }
    workItem.Save();
    if (i == 0)
    {
        Log.Information("Saved for the first time Work Item for type {workItemType} with id {workItemId} related to original id {originalId}", workItem.Type.Name, workItem.Id, itemToMigrate.OriginalId);
    }
    else
    {
        Log.Debug("Saved iteration {i} for original id {originalId}", i, itemToMigrate.OriginalId);
    }
}

return true;

The above code migrates only a couple of fields (Title and Description), but it nevertheless allows me to verify that I’m able to import not only a snapshot of a Work Item, but its full history. As you can see this is really simple: for each version I only need to populate System.ChangedDate and System.ChangedBy, and in the first iteration I can also set System.CreatedBy and System.CreatedDate.

One rule is in place: you cannot save with a date that is not greater than the date used for the last save, which forces you to import all the versions in the correct order. Apart from this, you can simply save with a date in the past and as a different user.
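
If the source system does not guarantee that versions arrive already ordered, a defensive sort before the loop avoids this failure (a small sketch; the property names are those of the MigrationItemVersion class shown earlier, and the sample code assumes versions are already ordered):

// Make sure versions are imported in chronological order, because
// System.ChangedDate must always be greater than the date of the previous save.
var orderedVersions = itemToMigrate.Versions
    .OrderBy(v => v.VersionTimestamp)
    .ToList();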

Before saving the Work Item I simply issue a call to the Validate() method, which is able to detect any validation errors before the save. In case of errors, I log them and then return false to inform the caller that the Work Item was not fully imported.

This is a simple test that imports some bogus data.

MigrationItem mi = new MigrationItem();
mi.OriginalId = "AA123";
mi.WorkItemDestinationType = "Product Backlog Item";
mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title test",
    VersionTimestamp = new DateTime(2010, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

mi.AddVersion(new MigrationItemVersion()
{
    AuthorEmail = "alkampfer@outlook.com",
    Description = "Description",
    Title = "Title Modified Again",
    VersionTimestamp = new DateTime(2011, 01, 23, 22, 10, 32),
});

var importResult = importer.ImportWorkItemAsync(mi).Result;

Launching the test program I get this output:

Figure 4: Output of the test import.

This is not the first run, so the log informs me that an item bound to that original Id already existed and was deleted, then that a new Work Item was created and updated two times.

Figure 5: Work Item correctly created in the Azure DevOps collection

The nice thing is that the work item History reports all the modifications with the correct dates, and this can be done only because the user that generated the token is a member of the Project Collection Service Accounts group. (Do not forget to remove the user from that group after finishing the import, or use a dedicated user for the import.)

Thanks to the great flexibility of the Azure DevOps API you can easily import data with full history, obtaining a migration with higher fidelity than importing only the last snapshot of the data.

The only indication that the action was performed by an import is that the history shows (Via Gian Maria Ricci – aka Alkampfer), which indicates that the action was not really performed by Alkampfer the Zoalord, but was instead done with impersonation. Nevertheless, the full history is maintained.

You can find the full example on GitHub.

Happy Azure DevOps.

Gian Maria.

Sonar Analysis of Python with Azure DevOps pipeline

Once you have tests and Code Coverage for your build of Python code, the last step for a good build is adding support for Code Analysis with Sonar/SonarCloud. SonarCloud is the best option if your code is open source, because it is free and you do not need to install anything except the free addin from the Azure DevOps Marketplace.

Starting from the original build you only need to add two steps: Prepare Analysis on SonarCloud and Run SonarCloud Analysis, in the same way you do analysis for a .NET project.

Figure 1: Python build in Azure DevOps

You do not need to configure anything for a standard analysis with default options, just follow the configuration in Figure 2:

Figure 2: Configuration of Sonar Cloud analysis

The only trick I had to apply is deleting the /htmlcov folder created by pytest for code coverage results. Once the coverage result has been uploaded to the Azure DevOps server I do not need it anymore, and I want to exclude it from the Sonar analysis. Remember that if you do not configure anything special, SonarCloud will analyze everything in the code folder, so you will end up with errors like these:

Figure 3: Failed Sonar Cloud analysis caused by output of code coverage.

You can clearly do a better job by simply configuring the SonarCloud analysis to skip those folders, but in this situation a simple Delete Files task does the job.

To avoid cluttering SonarCloud analysis with unneeded files, you need to delete any files that were generated in the directory and that you do not want to analyze, like code coverage reports.

Another important setting is in the Advanced section, because you should specify the file containing the code coverage result as an extra Sonar property.

Figure 4: Extra property to specify location of coverage file in the build.

Now you can run the build and verify that the analysis was indeed sent to SonarCloud.

Figure 5: After the build I can analyze code smells directly in sonar cloud.

If, like me, you prefer YAML builds, here is the complete YAML build definition that you can adapt to your repository.

queue:
  name: Hosted Ubuntu 1604

trigger:
- master
- develop
- features/*
- hotfix/*
- release/*

steps:

- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'

- bash: |
   pip install pytest 
   pip install pytest-cov 
   pip install pytest-xdist 
   pip install pytest-bdd 
  displayName: 'Install a bunch of pip packages.'

- task: SonarSource.sonarcloud.14d9cde6-c1da-4d55-aa01-2965cd301255.SonarCloudPrepare@1
  displayName: 'Prepare analysis on SonarCloud'
  inputs:
    SonarCloud: SonarCloud
    organization: 'alkampfergit-github'
    scannerMode: CLI
    configMode: manual
    cliProjectKey: Pytest
    cliProjectName: Pytest
    extraProperties: |
     # Additional properties that will be passed to the scanner, 
     # Put one key=value per line, example:
     # sonar.exclusions=**/*.bin
     sonar.python.coverage.reportPath=$(System.DefaultWorkingDirectory)/coverage.xml

- bash: 'pytest --junitxml=$(Build.StagingDirectory)/test.xml --cov --cov-report=xml --cov-report=html' 
  workingDirectory: '.'
  displayName: 'Run tests with code coverage'
  continueOnError: true

- task: PublishTestResults@2
  displayName: 'Publish test result /test.xml'
  inputs:
    testResultsFiles: '$(Build.StagingDirectory)/test.xml'
    testRunTitle: 010

- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/htmlcov'
    additionalCodeCoverageFiles: '$(System.DefaultWorkingDirectory)/**'

- task: DeleteFiles@1
  displayName: 'Delete files from $(System.DefaultWorkingDirectory)/htmlcov'
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)/htmlcov'
    Contents: '**'

- task: SonarSource.sonarcloud.ce096e50-6155-4de8-8800-4221aaeed4a1.SonarCloudAnalyze@1
  displayName: 'Run Sonarcloud Analysis'

The only setting you need to adapt is the name of the SonarCloud connection (in this example it is called SonarCloud), which you can add/change in Project Settings > Service Connections.

Figure 6: Service connection settings where you can add/change connection with Sonar Cloud Servers.

A possible final step is adding the Build Breaker extension to your account, which allows you to make your build fail whenever the SonarCloud Quality Gate fails.

Thanks to the Azure DevOps build system, creating a build that runs tests and analyzes your Python code is extremely simple.

Happy Azure Devops.

Gian Maria

Create Word document from Work Items

Post in the series:
1) API Connection
2) Retrieve Work Items Information
3) Azure DevOps API, Embed images into HTML

Now we have all the prerequisites in place to connect to an Azure DevOps account, execute a query to grab all the work items of a sprint, and modify the HTML of rich edit fields to embed images. It is time to create a Word document.

To give the exported document a better look and feel, the best approach is using the concept of templates, created as simple Word documents. With this technique you can apply all the styles and formatting directly in Word, then use placeholders to specify where you want to include fields of Work Items.

Figure 1: A simple example of a Word Template used to export content of a Work Item

As you can see in Figure 1, a template is a simple Word file with special placeholders like {{title}} in the text that identify the points where I want to insert content taken from Work Items. This approach is really useful because the Open XML format has a really nice feature that allows you to embed Word documents inside other Word documents. This allows me to open the template, perform the substitutions keeping all the formatting, and finally save everything to a temp file and append it to the main document. With this approach I do not need to do any formatting in code, while giving the user of the tool the ability to decide the layout of the output simply by editing a Word file.

The concept of templates makes it extremely simple for a user to specify the formatting, while keeping the code simple because it only has to look for specific tokens and perform substitutions.

I really want to thank Proximo S.r.L., a company I’m collaborating with, for giving me permission to share the code that manipulates the Word document and to publish it open source. The whole code is in the example hosted on GitHub.

Taking a high level look at the routine, I simply grab a reference to a list of WorkItem objects, then proceed to generate a new Word document with the help of an object called WordManipulator, which contains all the routines I need to generate a Word document starting from templates.

var fileName = Path.GetTempFileName() + ".docx";
using (WordManipulator manipulator = new WordManipulator(fileName, true))
{
    foreach (var workItem in workItems)
    {
        manipulator.InsertWorkItem(workItem, @"Templates\WorkItem.docx", true);
    }
}

The WordManipulator class simply accepts a file name and a boolean value that specifies whether we need to create a new file; in this example I request the creation of a new file. The InsertWorkItem method then accepts the template file and a boolean value that specifies whether you want to add a page break after the Work Item.
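
The real WordManipulator is part of the GitHub sample; just to give an idea of what the constructor does, here is a minimal sketch built on the Open XML SDK (names and details are assumptions, not the actual implementation):

// Namespaces: DocumentFormat.OpenXml, DocumentFormat.OpenXml.Packaging,
// DocumentFormat.OpenXml.Wordprocessing
public class WordManipulator : IDisposable
{
    private readonly WordprocessingDocument _document;
    private readonly Body _body;

    public WordManipulator(String fileName, Boolean createNewFile)
    {
        if (createNewFile)
        {
            //create an empty document with a single empty body
            _document = WordprocessingDocument.Create(fileName, WordprocessingDocumentType.Document);
            var mainPart = _document.AddMainDocumentPart();
            mainPart.Document = new Document(new Body());
        }
        else
        {
            //open an existing file (e.g. a copy of the template) for editing
            _document = WordprocessingDocument.Open(fileName, true);
        }
        _body = _document.MainDocumentPart.Document.Body;
    }

    public void Dispose()
    {
        //disposing the package flushes the content to disk
        _document.Dispose();
    }
}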

public void InsertWorkItem(WorkItem workItem, String workItemTemplateFile, Boolean insertPageBreak = true)
{
    //ok we need to open the template, give it a new name, perform substitution and finally append to the existing document
    var tempFile = Path.GetTempFileName();
    File.Copy(workItemTemplateFile, tempFile, true);
    using (WordManipulator m = new WordManipulator(tempFile, false))
    {
        m.SubstituteTokens(CreateDictionaryFromWorkItem(workItem));
    }

    AppendOtherWordFile(tempFile, insertPageBreak);
    File.Delete(tempFile);
}

As promised the routine is really simple: create a temporary file name, copy the template file over it, then open it with another instance of WordManipulator and call the SubstituteTokens function, passing a dictionary with all the fields of the Work Item we want to export.

private Dictionary<String, Object> CreateDictionaryFromWorkItem(WorkItem workItem)
{
    var retValue = new Dictionary<String, Object>();
    retValue["title"] = workItem.Title;
    retValue["description"] = new HtmlSubstitution(workItem.EmbedHtmlContent(workItem.Description));
    retValue["assignedto"] = workItem.Fields["System.AssignedTo"].Value?.ToString() ?? String.Empty;
    retValue["createdby"] = workItem.Fields["System.CreatedBy"].Value?.ToString() ?? String.Empty;
    return retValue;
}

For this first example I export only four fields, but the interesting part is the use of a helper class called HtmlSubstitution for the WorkItem.Description field, to tell the substitution engine that I do not want a simple text substitution but rather a piece of HTML to be inserted into the document. The helper method EmbedHtmlContent was discussed in the previous post and is needed only to obtain HTML with all the images embedded as base64.
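
HtmlSubstitution is essentially a marker: the substitution engine checks the type of the value to decide whether to do a plain text replace or inject HTML. A minimal sketch of such a class could be the following (the actual one is in the GitHub sample):

public class HtmlSubstitution
{
    public HtmlSubstitution(String htmlContent)
    {
        HtmlContent = htmlContent;
    }

    /// <summary>
    /// Raw HTML that should be inserted as an HTML AltChunk instead of plain text.
    /// </summary>
    public String HtmlContent { get; private set; }
}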

Thanks to the concept of templates, creating a Word document from Work Items is just a series of simple operations: open the template, perform the substitutions and append to the main document.

The SubstituteTokens method is slightly more complex, because it scans all the paragraphs of the document looking for keys of the substitution dictionary; when a key is found it performs the substitution using the corresponding value. The code is complex because when you put a token like {{token}} inside a Word file, it can be stored in the XML using more than one Run object (consult the ECMA specification for details). Given this premise, the code tries to find all the Run objects that contain the token, then performs the substitution.

Even if a paragraph seems really simple in Word, it can be saved with many Runs in OpenXml format, so when you perform substitutions you should never assume that a token fits into a single Run.
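
To give an idea of the structure, here is a simplified sketch of what SubstituteTokens does for the easy case, when the whole {{token}} ends up inside a single Run (the real implementation in the GitHub sample also handles tokens split across multiple Runs and the HtmlSubstitution values):

public WordManipulator SubstituteTokens(Dictionary<String, Object> tokens)
{
    foreach (var textElement in _body.Descendants<Text>())
    {
        foreach (var token in tokens)
        {
            var placeholder = "{{" + token.Key + "}}";
            //plain text substitution only; HtmlSubstitution values need an AltChunk instead
            if (!(token.Value is HtmlSubstitution) && textElement.Text.Contains(placeholder))
            {
                textElement.Text = textElement.Text.Replace(placeholder, token.Value?.ToString() ?? String.Empty);
            }
        }
    }
    return this;
}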

Some of the routines are really interesting. AppendOtherWordFile simply appends another file to the current one, using the concept of the AltChunk, an object in the SDK that allows me to embed one document into another. The trick about AltChunk is that it is a simple object where you can store complex data with the FeedData method, simply passing a stream as an argument.

public WordManipulator AppendOtherWordFile(String wordFilePath, Boolean addPageBreak = true)
{
    MainDocumentPart mainPart = _document.MainDocumentPart;
    string altChunkId = "AltChunkId" + Guid.NewGuid().ToString();
    AlternativeFormatImportPart chunk = mainPart.AddAlternativeFormatImportPart(AlternativeFormatImportPartType.WordprocessingML, altChunkId);

    using (FileStream fileStream = File.Open(wordFilePath, FileMode.Open))
    {
        chunk.FeedData(fileStream);
        AltChunk altChunk = new AltChunk();
        altChunk.Id = altChunkId;
        mainPart.Document
            .Body
            .InsertAfter(altChunk, mainPart.Document.Body
            .Elements().LastOrDefault());
        mainPart.Document.Save();
    }
    if (addPageBreak)
    {
        _body.Append(
            new Paragraph(
            new Run(
                new Break() { Type = BreakValues.Page })));
    }
    return this;
}

The real magic is in the AddAlternativeFormatImportPart method of the MainDocumentPart of the destination document, which allows you to create a special chunk containing an AlternativeFormatImportPartType.WordprocessingML part (another Word document). Thanks to this method we can create an alternate part, copy into it the entire content of the Word document to attach, and finally add this part to the original document (at the last position).

This method is so powerful that it can also be used to create an alternate import part of HTML type.

private AltChunk CreateChunkForHtmlPage(string htmlPage)
{
    var realHtml = $"{htmlPage}";
    string altChunkId = "myid" + Guid.NewGuid().ToString();
    using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(realHtml)))
    {
        // Create alternative format import part.
        AlternativeFormatImportPart formatImportPart = _document.MainDocumentPart.AddAlternativeFormatImportPart(
            AlternativeFormatImportPartType.Html,
            altChunkId);

        // Feed HTML data into format import part (chunk).
        formatImportPart.FeedData(ms);
    }
    var altChunk = new AltChunk();
    altChunk.Id = altChunkId;
    return altChunk;
}

The OpenXML format is really fascinating and it is a fantastic effort by Microsoft to create a standard that is easy to use. As you can see from the above snippet, inserting HTML code inside a Word document takes just a couple of calls.

All the rest of the code in the example is boilerplate. The code relating to this example is on GitHub with the tag 0.2.0. Here is an example of an exported document:

Figure 2: An exported document with complex description

The original Work Item in Figure 2 was created with the AIT WordToTFS tool, which allows bidirectional editing of Work Items in Word. As you can see, thanks to this tool I was able to change the font in the Work Item description, and you can also verify on the right that the exported document maintains that formatting, while still using the Word template file.

The output Word document maintains all the formatting of the template (color, bold, font, etc.), and the Work Item description also maintains its formatting, so the export is high fidelity.

To run this example I used this command line, which specifies all the information needed by the tool to export everything.

--address https://gianmariaricci.visualstudio.com 
--tokenfile C:\develop\Crypted\patOri.txt 
--teamproject "zoalord insurance" 
--iterationpath "zoalord insurance\Release 1\Sprint 6"
--areapath "zoalord insurance"

Happy new Year and Happy Azure DevOps.

Gian Maria.

Azure DevOps API, Embed images into html

Post in the series:
1) API Connection
2) Retrieve Work Items Information

Before generating a Word file from Work Item data we need to solve a little problem with HTML content in Work Item fields. As you know, Azure DevOps has a rich web editor that allows you to create complex text in some fields, like Description. The problem is: whenever you copy and paste images inside the web editor, those images are added as Work Item attachments and the real HTML content is just a reference to the attachment Url. If you want to generate a consistent Word document, or export to whatever destination you want, you should manipulate the HTML to embed the images, or the HTML will not be self-contained.

The focus of this article is: how to download Work Item attachments and how to embed image attachments directly in the HTML code.

HTML content in Work Items supports images, but images are usually a reference to attachments of the Work Item itself, so the HTML is not self-contained because it refers to protected resources.

Here is an example: I have a work item, I embedded an image in the description, and the HTML content of the System.Description field is an <img> tag with this src value: https://gianmariaricci.visualstudio.com/3a600197-fa66-4389-aebd-620186063db0/_apis/wit/attachments?FileID=481805&FileName=System.Description.0.png. This could be seen as a non-problem, because if you copy this url into a browser the image will be correctly downloaded, but the problem lies in authentication. If you embed this HTML into a Word document no one will be able to see the image, because Word is not authenticated to Azure DevOps, so you need to download the image locally and embed it into the HTML document.

A possible approach is to reference the HtmlAgilityPack library, then build a routine that programmatically downloads every attachment and finally embeds the image into the src attribute value using Base64 encoding. Here is the code.

public static String EmbedHtmlContent(this WorkItem workItem, String htmlContent)
{
    HtmlDocument doc = new HtmlDocument();
    doc.LoadHtml(htmlContent);

    var images = doc.DocumentNode.SelectNodes("//img");
    if (images != null)
    {
        foreach (var image in images)
        {
            //need to understand if it is in base 64 or no, if the answer is no, we need to embed image
            var src = image.GetAttributeValue("src", "");
            if (!String.IsNullOrEmpty(src))
            {
                if (src.Contains("base64")) // data:image/jpeg;base64,
                {
                    //image already embedded
                    Log.Debug("found image in html content that was already in base64");
                }
                else
                {
                    Log.Debug("found image in html content that point to external image {src}", src);
                    //is it a internal attached images?
                    var match = Regex.Match(src, @"FileID=(?<id>\d*)");
                    if (match.Success)
                    {
                        var attachment = workItem.Attachments
                            .OfType<Attachment>()
                            .FirstOrDefault(_ => _.Id.ToString() == match.Groups["id"].Value);
                        if (attachment != null)
                        {
                            //ok we can embed the image as base64
                            WorkItemServer wise = workItem.Store.TeamProjectCollection.GetService<WorkItemServer>();
                            var downloadedAttachment = wise.DownloadFile(attachment.Id);
                            byte[] byteContent = File.ReadAllBytes(downloadedAttachment);
                            String base64Encoded = Convert.ToBase64String(byteContent);
                            var newSrcValue = $"data:image/{attachment.Extension.Trim('.')};base64,{base64Encoded}";
                            image.SetAttributeValue("src", newSrcValue);
                        }
                    }
                }
            }
        }
    }

    return doc.DocumentNode.OuterHtml;
}

This code is really simple: it is an extension method of the WorkItem type, so you can use it whenever you have a reference to a Work Item. The code searches the HTML text for all img tags; for each img tag it verifies whether the src already contains the string base64 (because the image could already be embedded), and if the answer is no, we need to download the image locally and embed it.

If you look at the attachment url you can notice a FileID=xxxx parameter that points to the attachment of the work item. With a simple regex I can check whether the url conforms to this pattern, and if the answer is yes, I search the WorkItem.Attachments collection for the right attachment.

The Work Item object in the C# library has a nice Attachments collection that allows you to iterate through all the attachments to find any information you need.

Having a reference to the Attachment is crucial, because I need to know the extension of the file. Once the attachment object is found, I can use the Store property of the work item to grab a reference to the TfsTeamProjectCollection object, which in turn allows me to grab a reference to the WorkItemServer object that is needed to download the file locally. Thanks to the C# object model, from a simple reference to a Work Item I can traverse properties to reach the original collection object, which is still authenticated to the server.

Using the Store property of the Work Item allows you to access the original collection object that is authenticated to the server, so you can ignore authentication problems.

Once I have a reference to the WorkItemServer, its DownloadFile method simply downloads the attachment by id to a temporary local file; then a simple conversion to Base64 does the trick. The result is a src attribute that embeds the image.

Figure 1: Src attribute with image embedded

Now I can simply change the src attribute of the image thanks to the HtmlAgilityPack library, and finally return the modified HTML to the caller.

Now I have HTML code that embeds all the images and has no references to external resources in Azure DevOps, so I can embed it wherever I want without any problem.

Gian Maria.

Git and the Hell of case sensitiveness

If you know how Git works, you are perfectly aware that, even if you work on an operating system with a case insensitive file system, all commits are case sensitive. If you change the case of a folder and then commit modifications to files inside that folder, you can run into problems, because when the casing of a path changes, the files are different for the Git engine (but not for operating systems like Windows).

In the long run you will face some annoying problems, like Git showing that some files are modified (even though you didn’t touch them) while you are unable to undo the changes or work with those files. This problem becomes really annoying during rebase operations.

Having files whose paths differ only by case is one of the most annoying problems with Git repositories on Windows.

Luckily, Azure DevOps has an option for Git repositories that lets the engine reject pushes containing file names that differ only by case, avoiding this problem entirely.

Figure 1: Options for Cross platform compatibility can solve most headaches

The first option completely blocks pushes that contain files not compatible across platforms and is the option we are looking for, because it prevents you from pushing code that will lead to case sensitivity problems.

The other two options are equally useful: the second one prevents you from pushing paths with forbidden names or incompatible characters (remember that these differ between Windows and Linux), and the third one blocks pushes that contain paths with unsupported lengths, a problem that is really nasty for Windows users.

Finally, if you already have case sensitivity problems in your repository because you pushed your code before enabling these options, I can suggest a nice tool available on GitHub called Git Unite that finds all such problems in the repository and fixes them. You can clone the project, compile it in Visual Studio, then launch it from the command line giving the path of a local Git repository as its single argument, and it will do everything automatically.

Gian Maria
