YAML Build in Azure DevOps

I’ve blogged in the past about YAML build in Azure DevOps, but in those early days that kind of build was a little rough and many people still preferred the old builds based on visual editing in the browser. One of the main complaints was that the build was not easy to edit and there were some glitches, especially when it was time to access external services.

Months after the first version, the experience has really improved and I strongly suggest you start migrating existing builds to this new system, to take advantage of having the build definition directly in the code, a practice that is more DevOps oriented and allows you to have different build tasks for different branches.

YAML build is now a first class citizen in Azure DevOps and it is time to plan switching old builds to the new engine.

You can simply start with an existing build: just edit it, select one of the phases (or the entire process), then press the View YAML button to grab the generated YAML definition.

Figure 1: Generate YAML definition from an existing build created with the standard editor

Now you can simply create a YAML file in any branch of your repository, paste the content into the file, commit it to the branch and create a new build based on that file. I can select not only AzDO repositories, but also build from GitHub and GitHub Enterprise repositories.

Figure 2: I can choose GitHub as source repository, not only Azure Repos

Then I can choose the repository, searching across all the repositories I have access to with the access token used to connect to GitHub.

Figure 3: Accessing GitHub repositories is simple: once you have connected the account with an access token, AzDO can search your repositories

Just select a repository and choose the Existing Azure Pipelines option; if you are starting from scratch you can create a starter pipeline instead.

Figure 4: Choose the option to use an existing pipeline.

You are ready to go: just choose the branch and the YAML file and you are done.

Figure 5: You can directly specify the build file in the pipeline creation wizard.

Converting an existing build pipeline to YAML is a matter of no more than ten minutes of your time.

Now you can run and edit your build directly from the browser; the whole process, including connecting my AzDO account to GitHub, took no more than ten minutes.

Figure 6: Your build is ready to run inside Azure DevOps.

Your build is now ready to run without any problem. If you specified triggers as in Figure 6, you can just push to the repository to have the build automatically kick in and execute. You can also edit the build directly in the browser, and by pressing the Run button (Figure 6) you can trigger a run without the need to push anything.
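
To give an idea of the shape of such a file, here is a minimal sketch of a YAML definition; the trigger branch, pool and the single task are placeholder assumptions, the file you generate with View YAML will contain the tasks of your original definition.

trigger:
- master

pool:
  vmImage: 'windows-latest'    # hypothetical hosted agent image

steps:
- task: DotNetCoreCLI@2        # hypothetical task, replace with the tasks exported from your old definition
  displayName: Build solution
  inputs:
    command: build
    projects: '**/*.csproj'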

But the coolness of the current YAML build editor really starts to shine when you edit your build in the web editor, because you have IntelliSense, as you can see in Figure 7.

Figure 7: The YAML build web editor now has IntelliSense.

As you can see, the YAML build editor lets you edit with full IntelliSense support: if you want to add a task, you can simply start writing task followed by a colon and the editor will suggest all the available tasks. When it is time to edit properties, you have IntelliSense and help for each task parameter, as well as help for the entire task. This is really useful because it immediately spots deprecated tasks (Figure 9).

Figure 8: All parameters can be edited with full IntelliSense and help support

Figure 9: The editor warns you about deprecated tasks that should no longer be used.

With the new IntelliSense-enabled web editor, maintaining a YAML build is now easy and no more difficult than using the standard graphical editor.

Outdated tasks are underlined in green, so you can immediately spot where the build definition is not optimal; as an example, if a task has a newer version, the old version is underlined in green and IntelliSense tells me that the value is no longer supported. This area still needs some more love, but it works quite well.

There are no more excuses to keep using the old web-editor-based build definitions: go and start converting everything to YAML definitions, your life as a build maintainer will be better :)

Gian Maria.

Find Work Items in Azure DevOps: the Was Ever operator

The query language in Azure DevOps is really rich and people sometimes miss its power, struggling to find and manage Work Items with the standard boards alone. In a complex project there are many times when you cannot find a specific Work Item and you feel lost, because you know it is in the system but you just do not know how to find it.

When you have thousands of Work Items in a TFS or AzDO instance, finding a specific one can become difficult if you do not remember its title perfectly but only some general characteristics.

One not-so-common criterion is: “I remember a certain Work Item that was assigned to me, but I cannot find it anymore among my assigned Work Items.” This can happen because it was reassigned to another team member, and you cannot find it because you remember the general topic of the Work Item but you forgot its title.

Figure 1: The Was Ever operator in action.

As you can see in Figure 1, if you remember that you were once assigned to a certain Work Item, you can use the “Was Ever” operator, which searches for a value of a field in the history of Work Items. Such a query returns all Work Items you were ever assigned to, even if they are now assigned to a different person. Sorting by Changed Date or Created By can help you further refine the search and find the Work Item you are looking for.
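
For reference, the equivalent condition expressed directly in WIQL uses the EVER operator; a sketch of such a query, using standard System fields, could be:

SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.AssignedTo] EVER @Me
ORDER BY [System.ChangedDate] DESC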

The Was Ever operator is not available for all fields, but it is a nice, not-so-well-known feature that can save you lots of time when you need it.

Gian Maria.

Application Insights in real life: avoid sending garbage to your AI instance

Application Insights is a really nice Azure service that allows you to instrument your application to get a complete set of telemetry and problem diagnostics. Usually when you start using Application Insights you already have some logging in place (log4net, Serilog) and you want to integrate it with AI so that some kinds of logs (e.g. errors) are automatically sent to AI.

All these log libraries allow you to write a sink / appender to send logs to your chosen destination, so sending all your Serilog / log4net logs to Application Insights is usually a matter of minutes.

Integrating an existing log library with Application Insights is a matter of a few minutes of code, or just referencing some existing libraries

Even if this seems a perfect solution, it is not without problems; for example, it is possible for the log library to send too many logs to AI. In a piece of software I’m developing there is a poller with a 50 ms interval on a collection in a MongoDB database. For various reasons a bad record got into the collection, we had a deserialization error in some of the clients and, boom, we had an exception logged every 50 ms on 4 machines, which is roughly 288k logs per hour (20 exceptions per second × 3,600 seconds × 4 machines). The net result was that the Application Insights instance was full of unnecessary and identical logs.

Another potential problem is logging every Mongo query (polling will flood AI with lots of identical queries with less than 1 ms execution time); this simply pollutes your AI instance with rubbish data.

Luckily enough, AI gives you the ability to intercept every telemetry item and filter it out before it is sent to Azure; the secret is implementing the ITelemetryProcessor interface, which allows you to discard unwanted logs.

public void Process(ITelemetry item)
{
    if (item is DependencyTelemetry dependencyTelemetry)
    {
        if (!OkToSend(dependencyTelemetry))
        {
            return;
        }
    }
    else if (item is ExceptionTelemetry exceptionTelemetry)
    {
        if (!OkToSend(exceptionTelemetry))
        {
            return;
        }
    }
    else if (item is EventTelemetry eventTelemetry)
    {
        if (!OkToSend(eventTelemetry))
        {
            return;
        }
    }
    _next.Process(item);
}

As you can see, implementing the Process method leaves you complete freedom in filtering telemetry: if you do not want an item to be sent to AI, simply do not call _next.Process.

Application Insights is really extensible and has tons of extensibility points to customize the information it collects. In this article I’m dealing with ITelemetryProcessor, which allows you to manipulate and optionally discard a telemetry item entirely before it is sent to the server.
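
For context, the Process method shown above belongs to a class that implements ITelemetryProcessor and wraps the next processor in the chain. A minimal skeleton, consistent with the registration code shown later in this post, might look like this; the field names and the placeholder body are assumptions.

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class JarvisApplicationInsightFilter : ITelemetryProcessor
{
    //next processor in the chain, injected by the chain builder
    private readonly ITelemetryProcessor _next;

    //minimum duration for a Mongo dependency to be worth tracking
    private readonly int _limitQueryLogInMs;

    public JarvisApplicationInsightFilter(ITelemetryProcessor next, int limitQueryLogInMs)
    {
        _next = next;
        _limitQueryLogInMs = limitQueryLogInMs;
    }

    public void Process(ITelemetry item)
    {
        //the Process method shown above goes here: it calls _next.Process(item)
        //only for telemetry that passes the OkToSend checks.
        _next.Process(item);
    }
}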

Once you have created your telemetry processor you simply need to add it to the default chain. For this example I have a simple component that does a couple of things: it avoids flooding AI with identical errors and avoids tracking any Mongo dependency that lasts less than a certain number of milliseconds.

var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;

Int32 limitQueryLogInMs = 50;
//MongoDependencyMinDurationInMilliseconds is a string setting; if it is missing or not a valid number we keep the 50 ms default
if (Int32.TryParse(MongoDependencyMinDurationInMilliseconds ?? String.Empty, out Int32 limitDuration))
{
    limitQueryLogInMs = limitDuration;
}

builder.Use((next) => new JarvisApplicationInsightFilter(next, limitQueryLogInMs));
builder.Build();

This solution was developed because in this project the customer is starting to use AI mainly to monitor for problems, so we are interested only in slow Mongo dependencies.

The logic inside OkToSend is 100% custom for your needs. In this example we want to filter out any dependency that has certain special strings in its data, and we also want to filter out all dependencies whose Url custom property contains certain strings. The first two lines ensure that any failed dependency is sent to AI without any further filtering (we do not want to miss a failed dependency just because it falls under the duration filter).

private bool OkToSend(DependencyTelemetry dependencyTelemetry)
{
    //always log dependency that failed,
    if (dependencyTelemetry.Success != true)
        return true;

    //ok filter out
    if (_forbiddenTelemetryStrings.Any(u => dependencyTelemetry.Data.IndexOf(u, StringComparison.OrdinalIgnoreCase) >= 0))
    {
        return false;
    }

    if (dependencyTelemetry.Properties.TryGetValue("Url", out var url))
    {
        if (_forbiddenUrl.Any(u => url.IndexOf(u, StringComparison.OrdinalIgnoreCase) >= 0))
        {
            return false;
        }
    }

    return true;
}

The second feature of this processor is avoiding flooding AI when exception tracing goes south. The goal is to avoid too many identical exceptions in a short time range; if some part of the application is experiencing errors, we want at most one exception trace in a given interval. An error is a symptom that something is wrong: if the DB is not reachable and I’m polling every 50 ms, an exception every 5 minutes is perfectly fine and far better than an exception every 50 ms.

It is always good practice to limit the number of logs you send to Application Insights, to avoid using too much bandwidth and cluttering the server with garbage

For this reason we have a simple antiflood component that monitors how often we are sending telemetry identified by a given string that uniquely identifies the call. As an example, if I configure my antiflood with a 5 minute interval, whenever I call the antiflood.ShouldPass method with a given string key, it returns true no more than once every 5 minutes for the same string.
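
The antiflood component itself is not shown in this post; a minimal sketch of how ShouldPass could work, assuming a thread-safe dictionary keyed by the telemetry string (the real component also counts suppressed entries and raises the eviction event used below), is:

using System;
using System.Collections.Concurrent;

public class Antiflooder
{
    private readonly TimeSpan _interval;

    //for each key, remember when we last let a telemetry item through
    private readonly ConcurrentDictionary<String, DateTime> _lastPassed =
        new ConcurrentDictionary<String, DateTime>();

    public Antiflooder(TimeSpan interval)
    {
        _interval = interval;
    }

    //returns true at most once per interval for each distinct key
    public bool ShouldPass(String key)
    {
        var now = DateTime.UtcNow;
        bool pass = false;
        _lastPassed.AddOrUpdate(
            key,
            _ => { pass = true; return now; },      //first occurrence of the key: let it pass
            (_, last) =>
            {
                if (now - last >= _interval)
                {
                    pass = true;                    //interval elapsed: let it pass again
                    return now;
                }
                return last;                        //still inside the interval: suppress
            });
        return pass;
    }
}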

To build a unique string for an exception I use three properties: Loggername (the name of the class that is logging), ServiceName (the name of the logical microservice that is logging) and finally User (if present, the name of the user that requested a command, called an API, etc.). The goal is to avoid flooding the server while also avoiding missing important exceptions.

private bool OkToSend(ExceptionTelemetry exceptionTelemetry)
{
    //Some exceptions can be rogue (e.g. pollers): they tend to flood the log with useless exceptions.
    String exceptionKey = "EXCEPTION/";
    var propertyNames = new[] { "Loggername", "ServiceName", "User" };
    foreach (var propertyName in propertyNames)
    {
        if (exceptionTelemetry.Properties.TryGetValue(propertyName, out var propertyValue))
        {
            //a single logger can go rogue.
            exceptionKey += propertyValue + "/";
        }
    }

    exceptionKey += exceptionTelemetry.Message.Substring(0, Math.Min(exceptionTelemetry.Message.Length, 50));
    return _antiflooder.ShouldPass(exceptionKey);
}

If a single poller goes wild, I get no more than one exception every 5 minutes, but if the entire DB is down I get more exceptions, because every 5 minutes I can have one exception for each combination of logger, service and unique user. Usually this leads to hundreds of exceptions per minute, but that is a clear indication that something catastrophic is happening (the DB is down and every single component that depends on it is failing).

Finally, the antiflooder keeps count of the number of discarded exceptions, and a timer evicts each string after the threshold time has passed. This allows me to log how many exceptions were actually generated during the antiflood interval.

private void Antiflooder_EventEvicted(object sender, Antiflooder.AntiFlooderEntryEvictedEventArgs e)
{
    if (e.Count > 1)
    {
        TraceTelemetry traceTelemetry = new TraceTelemetry();
        traceTelemetry.SeverityLevel = e.Entry.StartsWith("EXCEPTION") ? SeverityLevel.Error : SeverityLevel.Information;
        traceTelemetry.Message = $"Antiflood got N°{e.Count} occurrences of {e.Entry} in an interval of {e.GetElapsedSeconds()} seconds";
        ApplicationInsightConfiguration.Instance.GetTelemetryClient().TrackTrace(traceTelemetry);
    }
}

If the count is greater than one, it means that the antiflood discarded some logs, but I still want to trace how many error logs were actually generated.

With a few lines of code I was able to prevent my application from sending tons of unnecessary traces (dependencies and errors) to the AI instance, keeping only the interesting ones.

I was really amazed by the flexibility of the Application Insights library, because it gives me full control of the telemetry pipeline with a few lines of code. This example was made with C#, but you can obtain the same result with the SDKs for other technologies.

Gian Maria

Is Manual Release in Azure DevOps useful?

When people create a release in Azure DevOps, they primarily focus on how to make it automatic, but to be 100% honest, automation is only one side of a release, and probably not the most useful one.

First of all, a release is about auditing and understanding which version of the software is released where and by whom. In this scenario what matters most is “how can I deploy my software to production”.

Figure 1: Simple release composed of two stages.

In Figure 1 I created a simple release with two stages; this clearly states that to go to production I need to deploy to a Test Machine stage first, then deploy to production. I do not want to give you a full tutorial, MSDN is full of nice examples, but if you look at the picture you will notice the small user icons before each stage, which allow you to specify who can approve a release into that stage, or whether the stage should start automatically.

What matters most when you plan a release is not how you can automate the deployment, but how to structure the release flow: stages, people, etc.

As I strongly suggest, even if you have no idea how to automate the release, you MUST have at least a release document that contains detailed instructions on how to install the software. Once you have a release document, you can simply add it to source control and create a completely manual release definition.

If the release document is included in source control, you can simply store it as part of the build artifacts; it will then be automatically downloaded by the release, and you can create a release like the one below. In Figure 2 you can see a typical two-phase release for the stages defined in Figure 1.

Figure 2: A manual release uses a copy file task to copy artifacts to a well known location, followed by a manual step.

I usually have an agent-based phase because I want to copy artifact data from the agent directory to a well known directory. The agent directory is clumsy and can be deleted by cleanup, so I want my release files copied to a folder like c:\release (Figure 3).

Figure 3: The only automated task is copying all the artifacts to the c:\release folder
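
For reference, the Copy Files task of Figure 3 is configured more or less like this; the source folder is an assumption and depends on where your artifacts are downloaded:

Source Folder: $(System.DefaultWorkingDirectory)
Contents: **
Target Folder: c:\release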

After this copy I have another phase, this time an agentless one, because it contains only a manual step that simply tells the user where to find the instructions for the manual deploy.

Figure 4: Manual step in the release process

This is important because, as you can see in Figure 4, I can instruct the user on where to find the release document (it is in source code, the build adds it to the artifacts, and finally it is copied into the release folder). Another important aspect is the ability to notify specific users to perform the manual task.

Having a manual release document stored in source code allows you to evolve the file along with the code: each code version has its correct release document.

Since I use GitVersion to change the build name based on GitVersion tags, I prefer going to the Options tab of the release and changing the release name format.

Figure 5: Configure release name format to include BuildNumber

Releases usually have a simple increasing number, $(Rev:rr), but having something like Release-34 as the release name does not tell me anything useful. If you are curious about what you can use in the Release Name Format field, you can simply check the official documentation. There you will read that you can use BuildNumber, which will contain the build number of the first artifact of the release. In my opinion this is really useful information: if the build name contains GitVersion tags, it allows you to have a meaningful release name.
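
For example, assuming the Build.BuildNumber token described in the documentation, a format along these lines produces names like the one visible in Figure 7:

Release - $(Rev:rr) - $(Build.BuildNumber)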

Figure 6: New release naming in action.

Looking at Figure 6 you could argue that the build name is already visible below the release number, so the new naming method (1) does not add any information compared to the standard naming with only an increasing number (2).

This is true until you move to the Deployment Groups view or other Release Management UIs, because there are many places where you can only see the release name. If you look at Figure 7 you can verify that with the old naming scheme I only see the release number for each machine, so for machine (1) I just know that the latest release is Release 16, while for a machine released after the naming change I get Release - 34 - JarvisPackage 4.15.11.

Figure 7: Deployment groups with release names

Thanks to the release document and Azure DevOps Release Management, I have a controlled environment where I can know who started a release, which artifacts were deployed to each environment and who performed the manual deploy.

Having a completely manual release definition is a great starting point, because it states what must be deployed, by whom and how, and it logs every release: who started it, who performed the manual steps, who approved the release for every stage, etc.

Once everything works, I usually start writing automation scripts to automate the steps of the release document. Each time a step is automated, I remove it from the deploy document or explicitly mark it as “not to do because it is automated”.

Happy DevOps.

WIQL editor extension for Azure DevOps

One of the nice features of Azure DevOps is extensibility: thanks to the REST API you can write add-ins or standalone programs that interact with the service. One of the add-ins I like the most is the Work Item Query Language Editor, a nice add-in that allows you to interact directly with the underlying syntax of Work Item queries.

Once installed, whenever you are in the query editor you can edit the query directly in WIQL syntax, thanks to the “Edit Query wiql” menu entry.

Figure 1: The new WIQL query editor menu entry in action

As you can see in Figure 2, there are lots of nice features in this add-in, not only the ability to edit a query directly in WIQL syntax.

Figure 2: WIQL editor in action

You can obviously edit and save the query (3), but you can also export the query to a file that is downloaded to your PC and then re-import it into a different Team Project. This is a nice function if you want to store some typical queries somewhere (e.g. source control) and then re-import them into different Team Projects, or even a different organization.

If you start editing the query, you will be amazed by the IntelliSense support (Figure 3), which guides you in writing a correct query and is really useful because it offers a nice list of all available fields.

Figure 3: IntelliSense in action in the query editor.

The IntelliSense seems to actually use the API to grab a list of all the valid fields, because it even suggests custom fields that you used in your custom process. The only drawback is that it lists all the available fields, not only those available in the current Team Project, but this is a really minor issue.

With IntelliSense, syntax checking and field suggestions, this add-in is a real must-install for your Azure DevOps instance.

Figure 4: IntelliSense is available not only for default fields, but also for custom fields used in custom processes.

If you are interested in the editor used, this add-in uses the Monaco editor, another nice piece of open source software by Microsoft.

Another super cool feature of this extension is the Query Playground, where you can simply type your query, execute it and visualize the result directly in the browser.

Figure 5: WIQL playground in action; note the ASOF operator used to issue a query against the past.

As you can see in Figure 5, you can easily test your query, but most importantly the ASOF operator is fully supported, which gives you the ability to run historical queries directly from the web interface instead of resorting to the API. If you need to experiment with WIQL and quickly create and test a query, this is the tool to use.

I think this add-in is really useful, not only if you are interacting with the service through the REST API and raw WIQL, but also because it allows you to export/import queries between projects/organizations and to execute simple historical queries directly from the UI.

Having full WIQL support allows you to use features that are not usually available through the UI, like the ASOF operator.
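
As a sketch, a historical query using ASOF could look like this; the date and the filters are just examples:

SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.WorkItemType] = 'Bug'
AND [System.State] = 'Active'
ASOF '2019-01-01'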

As a last trick: if you create a query in the web UI, then edit it with this add-in, add the ASOF operator and save, the ASOF clause is saved in the query, so you have a historical query executable from the UI. The only drawback is that if you later modify the query with the web editor and save, the ASOF operator is removed.

Gian Maria.