Quick Peek at Microsoft Security Code Analysis: Credential Scanner

Microsoft Security Code Analysis is a set of tasks for Azure DevOps pipelines that automate some security checks during the build of your software. Automated security scanning tools are in no way a substitute for human security analysis; remember: if you develop code ignoring security, no tool can save you.

Despite this fact, there are situations where static analysis can really benefit you, because it saves you from simple and silly errors that can lead to trouble. Every task in the Microsoft Security Code Analysis package is designed to solve a particular problem and to prevent a common mistake.

Remember that security cannot be enforced with automated tools alone; nevertheless they are useful to avoid some common mistakes, and they are not meant to replace a security audit of your code.

The first task I suggest you look at is Credential Scanner, a simple task that searches source code for potential credentials inside files.

Figure 1: Credential scanner task

Modern projects, especially those designed for the cloud, use tons of sensitive data that can be mistakenly stored in source code. The easiest mistake is storing credentials for databases or other services inside a configuration file, like web.config for ASP.NET projects, or leaving a token for a cloud resource or service in the code, leaving that resource unprotected.

Including Credential Scanner in your Azure Pipeline can save you trouble: with minimal configuration you can have it scan your source code for credentials. All you need to do is drop the task into the pipeline, use the default configuration, and you are ready to go. Full details on configuring the task can be found here.
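If you prefer YAML pipelines over the visual designer, the same task can be added with a few lines. The snippet below is only a sketch based on my reading of the extension documentation: the task name and input names may differ in the version you install, so treat them as assumptions to verify.

steps:
# Sketch: run Credential Scanner on the repository sources (task and input names assumed)
- task: CredScan@2
  inputs:
    outputFormat: 'csv'                        # produce the CSV file analyzed later in this post
    scanFolder: '$(Build.SourcesDirectory)'    # scan the whole checked-out source tree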

Figure 2: Configuration pane for Credential Scanner

Credential Scanner will run in your pipeline and report the problems it found.

Figure 3: Credential scanner found a match.

If you look at Figure 3, Credential Scanner found a match, but the task does not make the build fail (as you could expect). This is normal behavior, because all the security tasks are meant to produce an output file with the scan results; it is the duty of another dedicated task to analyze all the result files and make the build fail if problems are found.

It is normal for security-related tasks not to fail the build immediately; a dedicated task is needed to analyze ALL log files and fail the build if needed.

The Post Analysis task is your friend here.

Figure 4: Add a Post Analysis task to have the build fail if some of the security-related tasks failed

This special task allows you to specify which of the security tasks you want to analyze, and this is the reason why the build does not fail immediately when Credential Scanner finds a problem. The goal here is to run ALL security-related tasks, then analyze all of their outputs and have the build fail if problems were found.
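In a YAML pipeline the Post Analysis step would look roughly like the sketch below; again, the task name and its inputs are assumptions taken from my reading of the extension documentation, to be verified against your installed version.

steps:
# Sketch: fail the build if any of the selected analyzers reported problems
- task: PostAnalysis@1
  inputs:
    AllTools: false      # analyze only the tools explicitly selected below
    CredScan: true       # break the build on Credential Scanner findings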

Figure 5: Choose which analyzer you want to use to make the build fail.

After you add this task at the end of the build, your build will fail if security problems are found.

Figure 6: Build fails because one of the analyzers found problems; in this specific situation we have credentials in code.

As you can see from Figure 6, the Credential Scanner task is green and it is the Security Post Analysis task that made the build fail. It also logs some information in the build errors page, as you can see from Figure 7.

Figure 7: Build fails for issues in credential scanner

Now the final question is: where can I find the CSV file generated by the tool? The answer is simple: there is another special task whose purpose is to upload all the logs as artifacts of the build.

Figure 8: Simply use the PublishSecurityAnalysisLog task to have all security related logs published as artifacts.
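In a YAML pipeline, publishing those logs would look roughly like this sketch; the task name, version and input names are assumptions based on my reading of the extension documentation, so verify them against your installed version.

steps:
# Sketch: upload every security analysis log as a build artifact
- task: PublishSecurityAnalysisLogs@2
  inputs:
    ArtifactName: 'CodeAnalysisLogs'   # name of the artifact that will contain the logs
    ArtifactType: 'Container'          # publish as a normal build artifact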

As you can see from Figure 9, all the logs are correctly uploaded as artifacts and divided by tool type. In this example I ran only the Credential Scanner tool, so it is the only output I have in my artifacts folder.

Figure 9: Credential Scanner output included as a build artifact.

After downloading the file you can open it with Excel (I usually use the CSV output format for Credential Scanner) and find what's wrong.

Figure 10: CSV output contains the file with the error and the line number, but everything else is redacted for security

As I can verify from the CSV output, I have a problem at line 9 of the config.json file; time to look at the code and find the problem.

Figure 11: Password included in a config file.

In the CSV output file, the Credential Scanner task only stores the file, the row number and a hash of the credential found; this is needed to avoid leaking the credential from the build output.

Now, this example was made for this post, so do not try that password against me, it will just not work :). If you think that you would never fall for this silly mistake, remember that no one is perfect. Even though I try to avoid these kinds of errors, I must admit that some years ago I was contacted by a nice guy who told me that I had left a valid token in one of my samples. Shame on me, but these kinds of errors can happen. Thanks to Credential Scanner you can really mitigate them.

If you wonder what kind of rules the task uses to identify passwords, the documentation states that

CredScan relies on a set of content searchers commonly defined in the buildsearchers.xml file. The file contains an array of XML serialized objects that represent a ContentSearcher object. The program is distributed with a set of searchers that have been well tested but it does allow you to implement your own custom searchers too.

So you can download the task and examine the dll, but the nice aspect is that you can include your own searchers too.

If the tool finds a false positive, and you are really sure that the match is indeed a false positive, you can use a suppression file as described in the documentation (see the sketch after Figure 12).

Figure 12: Suppression rules for the task.
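For reference, the suppression file is a small JSON document; the shape below is only a sketch of the documented format as I recall it, so double-check the property names against the official docs. The file path and hash value shown here are purely hypothetical.

{
  "tool": "Credential Scanner",
  "suppressions": [
    {
      "file": "\\src\\samples\\config.json",
      "_justification": "Fake credential used only in sample code"
    },
    {
      "hash": "PUT_THE_HASH_REPORTED_IN_THE_CSV_HERE",
      "_justification": "Secret already rotated and no longer valid"
    }
  ]
}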

I must admit that Credential Scanner is a really powerful tool that should be included in every build, especially if you are developing open source code. Remember that there are lots of tools made to scavenge projects for this kind of vulnerability, so if you publish a sensitive password or key in an open source project it is a big problem: sooner or later it will bite you.

Gian Maria

Release app with Azure DevOps Multi Stage Pipeline

Multi Stage pipelines are still in preview on Azure DevOps, but it is time to experiment with a real build-release pipeline to taste the news. The biggest limit at the moment is that you can use Multi Stage pipelines to deploy to Kubernetes or to the cloud, but there is no support for agents in VMs (like the standard release engine has). This support will be added in the upcoming months, but if you use Azure or Kubernetes as a target you can already use them.

My sample solution is on GitHub; it contains a really basic ASP.NET Core project with some basic REST APIs and a really simple Angular application. One of the advantages of having everything in the repository is that you can simply fork my repository and experiment.

Thanks to Multi Stage Pipelines we can finally have the build-test-release process directly expressed in source code.

First of all you need to enable Multi Stage Pipelines for your account in the Preview Features panel, reachable by clicking your user icon in the upper right part of the page.

Figure 1: Enable MultiStage Pipeline with the Preview Features option for your user

Once Multi Stage Pipelines are enabled, all I need to do is create a nice release file to deploy my app to Azure. The complete file is here https://github.com/alkampfergit/AzureDevopsReleaseSamples/blob/develop/CoreBasicSample/builds/build-and-package.yaml and I will highlight the most important parts here. This is the starting part.

Figure 2: First part of the pipeline

One of the core differences from a standard pipeline file is the structure of jobs: after trigger and variables, instead of directly having jobs, we have a stages section, followed by a list of stages that in turn contain jobs. In this example the first stage is called build_test; it contains all the jobs to build my solution, run some tests and compile the Angular application. Inside a single stage we can have more than one job, and in this particular pipeline I divided the build_test stage into two jobs: the first is devoted to building the ASP.NET Core app, the other builds the Angular application.
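Here is a simplified sketch of that structure; it is not the exact content of the file linked above, and the job names, pool images and steps are illustrative placeholders.

trigger:
- master

stages:
- stage: build_test                  # first stage: build and test everything
  jobs:
  - job: build_aspnet_core           # first job: build and test the ASP.NET Core solution
    pool:
      vmImage: 'windows-2019'
    steps:
    - script: dotnet build MySolution.sln    # placeholder for the real build/test steps
  - job: build_angular               # second job: build the Angular application
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - script: npm install && npm run build   # placeholder for the real Angular build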

Figure 3: Second job of first stage, building angular app.

This part should be familiar to everyone who is used to YAML pipelines, because it is, indeed, a standard sequence of jobs; the only difference is that we put them under a stage. The convenient aspect of having two distinct jobs is that they can run in parallel, reducing the overall compilation time.

If you have groups of tasks that are completely unrelated, it is probably better to divide them into multiple jobs and have them run in parallel.

The second stage is much more interesting, because it contains a completely different type of job, called a deployment job, used to deploy my application.

Figure 4: Second stage, used to deploy the application

The dependsOn section is needed to specify that this stage can run only after the build_test stage has finished. Then the jobs section starts; it contains a single deployment job. This is a special type of job where you specify the pool, the name of an environment and then a deployment strategy; in this example I chose the simplest one, a runOnce strategy composed of a list of standard tasks.
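A simplified sketch of that shape follows; the environment name, pool image and deploy step are placeholders, not the exact values used in the repository.

- stage: deploy
  dependsOn: build_test              # run only after the build_test stage has completed
  jobs:
  - deployment: deploy_website       # special deployment job type
    pool:
      vmImage: 'windows-2019'
    environment: 'MyWebSite'         # placeholder environment name
    strategy:
      runOnce:                       # simplest strategy: run the deploy steps once
        deploy:
          steps:
          - script: echo "deploy tasks go here"   # placeholder for the real deployment tasks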

If you ask yourself what the meaning of the environment parameter is, I will cover it in much more detail in a future post; for this example just ignore it, and consider it a way to give a name to the environment you are deploying to.

Multi Stage pipelines introduce a new job type called deployment, used to perform the deployment of your application.

All child steps of a deployment job are the standard tasks used in a standard release; the only limitation of this version is that they run on the agent, you cannot run them on machines inside the environment (today you cannot add anything other than a Kubernetes cluster to an environment).

The nice aspect is that, since this stage depends on build_test, when the deployment section runs it automatically downloads the artifacts produced by the previous stage and places them in the $(Pipeline.Workspace) folder, in a subdirectory named after the artifact itself. This removes the need to explicitly transfer artifacts from the first stage (build and test) to the deployment stage.

Figure 5: Steps for deploying my site to Azure.

Deploying the site is really simple: I just unzip the ASP.NET website to a subdirectory called FullSite, then copy all the compiled Angular files into the www folder and finally use a standard AzureRmWebAppDeployment task to deploy the site to my Azure website.
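The sketch below approximates those steps; the artifact names, folder names and service connection are placeholders, and the exact inputs of the AzureRmWebAppDeployment task may vary with the task version, so treat them as assumptions.

steps:
# Unzip the ASP.NET Core site downloaded from the build_test stage artifacts
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: '$(Pipeline.Workspace)/WebSite/*.zip'   # placeholder artifact name
    destinationFolder: '$(Pipeline.Workspace)/FullSite'
# Copy the compiled Angular files into the www folder of the site
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Pipeline.Workspace)/AngularApp'             # placeholder artifact name
    TargetFolder: '$(Pipeline.Workspace)/FullSite/www'
# Deploy the assembled site to the Azure Web App
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'MyAzureServiceConnection'                # placeholder service connection
    WebAppName: 'my-sample-web-app'                              # placeholder web app name
    packageForLinux: '$(Pipeline.Workspace)/FullSite'            # folder to deploy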

Running the pipeline shows you a different user interface than a standard build, clearly showing the result of each distinct stage.

Figure 6: Result of a multi stage pipeline has a different User Interface

I really appreciate this nice graphical representation of how the stages are related. For this example the structure is really simple (two sequential stages), but it clearly shows the flow of the deployment and it is invaluable for more complex scenarios. If you click on Jobs you get the standard view, where all the jobs are listed in chronological order, with the Stage column that allows you to identify in which stage each job ran.

Figure 7: Result of the multi stage pipeline in jobs view

All the rest of the pipeline is pretty much the same as a standard pipeline; the only notable difference is that you need to use the stages view to download artifacts, because each stage has its own artifacts.

Figure 8: Downloading artifacts is possible only in the stages view, because each stage has its own artifacts.

Another nice aspect is that you can simply rerun each stage, which is useful in some special situations (like when your site is corrupted and you want to redeploy without rebuilding everything).

Now I only need to check if my site was deployed correctly and… voilà, everything worked as expected, my site is up and running.

Figure 9: Interface of my really simple sample app

Even if Multi Stage pipelines are still in preview, if you need to deploy to Azure or Kubernetes they can be used without problems; the real limitation of the current implementation is the inability to deploy with agents inside VMs, a real must-have if you have an on-premises environment.

In the next post I will deal a little more with Environments.

Gian Maria.

How to delete content in Azure DevOps wiki

Today I got a simple but interesting question about Azure DevOps: how can I completely delete the content of a wiki? There are not many reasons for doing this, but sometimes you really want to start from scratch. Now suppose you have your wiki:

Figure 1: Wiki with a simple page

You created some pages, you played a little bit with the wiki, you attached some cute pet photos and other content to it, maybe just to gain familiarity with the wiki itself.

Figure 2: Wiki with some content on it.

Now you want to delete everything, so that no member of the team can retrieve the pages and content anymore.

An Azure DevOps wiki is nothing more than a Git repository with Markdown content, so you can directly manipulate the Git repository if you need to alter wiki history.

To do a low-level manipulation of the wiki you simply clone the wiki repository locally; you can find the repository URL in the UI.

Figure 3: Clone wiki repository from the UI.

That menu option simply lets you grab the URL of the repository; then you can clone the repository locally and inspect all the commits done in the wiki (I use the command line, but you can use any UI of your choice).
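The commands are just plain Git; the URL below is a placeholder for the one you copy from the Clone wiki menu.

git clone https://dev.azure.com/yourorganization/YourProject/_git/YourProject.wiki
cd YourProject.wiki
git log --oneline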

Figure 4: Content of the wiki, a simple git repository

Now if you look at Figure 4 you can notice that the wiki is nothing more than a Git repository with a commit for each modification you made to the wiki. If you really want to reset everything and start the wiki from scratch, you can simply issue a

git reset --hard SHA_OF_FIRST_COMMIT

where SHA_OF_FIRST_COMMIT is the hash of the very first commit, the one with the comment Initializing wiki, in my example 86ec4c9. After the command has executed, your local wikiMaster branch points to the very first commit of the repository, an empty wiki.

Figure 5: Your local wikiMaster branch was reset to the very first commit, now wikiMaster points to an empty wiki

Now you can simply push with the --force option to reset the remote branch to that very same commit.

git push --force

Open the wiki page again to verify that it has reverted to the original version. The server still has the previous commits in its database, but they are not reachable anymore and will be deleted over time by internal garbage collection.

Resetting to the very first commit actually deletes everything from the wiki, restoring it to its pristine state.

This scenario is not really common, but a much more common scenario is when you mistakenly write something in the wiki, save the page and then want to delete what you have written. There are lots of reasons for this requirement: you mistakenly inserted sensitive data like passwords or tokens, or you simply wrote something that you want to permanently delete.

Looking at Figure 4, suppose you pasted a wrong image and you want to remove that image and all related content from the history of the page. If you simply edit the wiki page, remove the image and save the page again, the data is still in the history, and anyone can find the content you wanted to remove. The only solution is to rewrite Git history.

Since a wiki is a Git repository, everything you did remains in the history of the page; if you included sensitive information, editing the page, removing that information and saving again is not enough.

From Figure 4 you can verify that the incriminated commit is 97e520e. If you followed my previous example you can simply reset everything to the previous commit, actually deleting every content that was inserted after that commit.

git reset --hard 97e520e^

The special character ^ indicates the first parent of a commit, so the previous instruction tells Git to reset to the parent of the bad commit. After this operation a git push --force will reset the branch on the server. The incriminated content is now gone, along with every content that was inserted after it; you have actually restored the wiki content to a past point in time.

A git reset --hard in your wiki repository allows you to restore the wiki to a point in time, but everything that happened after that moment will be lost.

This is not a perfect approach: suppose you realize that someone stored a password in the wiki some days ago; you do not want to lose everything, but simply remove that specific content while leaving the other commits unchanged. Thanks to Git's flexibility you can obtain this result with an interactive rebase.

git rebase 97e520e^ -i

This will trigger a complete rewrite of the history from the parent of the incriminated commit to the last commit of the wiki. I am not going to give you a complete explanation of an interactive rebase, but basically you are presented with the list of all commits, starting with the commit you want to delete and ending with the latest commit in the branch.

Figure 6: Delete the commit with interactive rebase.

In Figure 6 you are seeing an example in which I have a single commit after the one to remove, but nothing changes if you have tons of commits after it. You simply need to change the command for the first commit (the commit you want to delete) from pick to d (drop). Leave all other rows unchanged, then save the script to continue (if you are not familiar with Vim, press I to edit the file, change it, then press ESC to come back to command mode and press :, then w, then q, then ENTER).
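For illustration, the edited todo list would look roughly like this; only 97e520e comes from this example, while the second hash and both commit messages are hypothetical.

# change the command of the bad commit from pick to d (drop), leave every other row as pick
d 97e520e Added image and password by mistake
pick 3fa2b71 Updated the home page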

This command deletes only the commit you want to remove, leaving all following commits unchanged. You have surgically removed a single bad save from your wiki.

Figure 7: Commit was removed, the local branch no longer contains commit 97e520e

Now, provided you are 100% sure that no one else modified the wiki in the short timespan you needed to clone and rebase the repository, you can issue a git push --force to overwrite the content of the repo on the Azure DevOps instance.

A Git interactive rebase is an operation where you rewrite history, so you can selectively remove a single commit from it.

This preserves all the other content of the wiki; you only removed a single commit from it, and there is no more history of that commit inside the wiki (the deleted commit actually still exists on the server, but it is unreachable and there is no way for others to retrieve it).

If you want to completely remove a page with all of its history, you need to delete multiple commits, but luckily Git has filter-branch and other more advanced commands. You can find more details here https://help.github.com/en/articles/removing-sensitive-data-from-a-repository

Have I ever told you how much I love Git? :)

Gian Maria.

Azure DevOps and SecDevOps

One of the cool aspects of Azure DevOps is its extensibility through the Marketplace, and for security you can find a nice Marketplace add-in called OWASP ZAP Scan (https://marketplace.visualstudio.com/items?itemName=kasunkodagoda.owasp-zap-scan) that can be used to automate OWASP tests for web applications.

You can also check this nice article on the Microsoft developer blogs https://devblogs.microsoft.com/premier-developer/azure-devops-pipelines-leveraging-owasp-zap-in-the-release-pipeline/ that explains how you can leverage OWASP ZAP analysis during a deployment with a release pipeline.

Really good stuff to read and use.

WIQL editor extension For Azure DevOps

One of the nice features of Azure DevOps is extensibility: thanks to the REST API you can write add-ins or standalone programs that interact with the services. One of the add-ins I like the most is the Work Item Query Language Editor, a nice add-in that allows you to interact directly with the underlying syntax of work item queries.

Once it is installed, whenever you are in the query editor you have the ability to directly edit the query with WIQL syntax, thanks to the “Edit Query wiql” menu entry.

Figure 1: Wiql query editor new menu entry in action

As you can see in Figure 2, there are lots of nice features in this add-in, not only the ability to edit a query directly in WIQL syntax.

Figure 2: WIQL editor in action

You can clearly edit and save the query (3), but you can also export the query into a file that will be downloaded to your PC, and you can then re-import it in a different Team Project. This is a nice function if you want to store some typical queries somewhere (for example in source control) and then re-import them in a different Team Project, or in a different organization.

If you start editing the query, you will be amazed by the IntelliSense support (Figure 3) that guides you in writing correct queries; it is really useful because it offers a nice list of all available fields.

Figure 3: Intellisense in action during Query Editor.

The IntelliSense seems to actually use the API to grab the list of all the valid fields, because it suggests even custom fields that you used in your custom process. The only drawback is that it lists all the available fields, not only the ones available in the current Team Project, but this is a really minor issue.

With IntelliSense, syntax checking and field suggestions, this add-in is really a must-install for your Azure DevOps instance.

Figure 4: Intellisense is available not only on default field, but also on custom fields used in custom process.

If you are interested in the editor used, you will find that this add-in uses the Monaco editor, another nice piece of open source software by Microsoft.

Another super cool feature of this extension is the Query Playground, where you can simply type your query, execute it and visualize the result directly in the browser.

Figure 5: Wiql playground in action, look at the ASOF operator used to issue query in the past.

As you can see from Figure 5, you can easily test your query, but what is most important, the ASOF operator is fully supported, and this gives you the ability to run historical queries directly from the web interface instead of resorting to the API. If you need to experiment with WIQL and quickly create and test a WIQL query, this is the tool to use.
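As a quick illustration, this is a minimal WIQL query that uses ASOF to ask how bugs looked at a past date; the field names are standard system fields, while the date is a placeholder.

SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.WorkItemType] = 'Bug'
ORDER BY [System.Id]
ASOF '2019-06-01'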

I think this add-in is really useful, not only if you are interacting with the service through the REST API and raw WIQL, but also because it allows you to export/import queries between projects and organizations and to execute historical queries directly from the UI.

Having full support for WIQL allows you to use features that are not usually available through the UI, like the ASOF operator.

As a last trick: if you create a query in the web UI, then edit it with this add-in, add an ASOF operator and save, the ASOF will be saved in the query, so you have a historical query executable from the UI. The only drawback is that, if you modify the query with the web editor and then save, the ASOF operator will be removed.

Gian Maria.