Consume Azure DevOps feed in TeamCity

Azure DevOps has integrated feed management you can use for NuGet, npm, etc.; feeds are private and only authorized users can download / upload packages. Today I had a little problem setting up a build in TeamCity that consumes a feed in Azure DevOps, because it failed with a 401 (Unauthorized) error.

The problem with Azure DevOps NuGet feeds is how to authenticate external toolchains or build servers.

This project still has some old builds in TeamCity, and when they started consuming packages published in Azure DevOps, those builds began failing with a 401 (Unauthorized) error. The question is: how can I consume an Azure DevOps NuGet feed from agents or tools that are not related to the Azure DevOps site itself?

I must admit that this information is scattered across various resources, and Azure DevOps simply tells you to add a nuget.config to your project and you are ready to go; but this is true only if you are using Visual Studio connected to the account.


Figure 1: Standard configuration in nuget.config to point to the new feed

The basic documentation forgets to mention authentication, except in the Get the Tools instructions, where you are advised to download the Credential Provider if you do not have Visual Studio.


Figure 2: Instructions to get NuGet and the Credential Provider

The Credential Provider is useful, but it is really not the solution here, because I want a more NuGet-friendly approach. The goal is: when the TeamCity agent issues a NuGet restore, it should just work.

A better alternative is to use the standard NuGet authentication mechanism, where you simply add a source with both username and password. Let's start from the basics: if I use the NuGet command line to list packages from my private source, I get prompted for a username.


Figure 3: NuGet asking for credentials to access a feed

Now, as with everything that involves Azure DevOps, when you are asked for credentials you can use anything for the username and provide a Personal Access Token as the password.


Figure 4: Specifying anything for the user and my access token, I can read the feed.
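In script form, the flow of Figures 3 and 4 looks like this (a sketch: the feed URL is a placeholder, and NuGet prompts interactively because no credentials are stored yet):

    .\nuget.exe list -Source https://pkgs.dev.azure.com/xxxxx/_packaging/yyyyyy/nuget/v3/index.json
    # UserName: anything
    # Password: <paste the Personal Access Token here>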

This is the easiest and most secure way to log in to Azure DevOps with command-line tools, especially because, to access the feed, I generated a token with really reduced permissions.


Figure 5: Access token with only packaging read permission

As you can see in Figure 5, I created a token that has only read access to packaging; if someone steals it, they can only read packages and nothing more.

Now the question is: how can I make the TeamCity agent use that token? TeamCity has special sections where you can specify username and password, or where you can add external feeds, but I want a general solution that works for every external tool, not only for TeamCity. It is time to understand how NuGet configuration works.

NuGet can be configured with nuget.config files placed in special directories, to specify configuration for the current user or for the entire computer.

Standard Microsoft documentation states that we can have three levels of nuget.config for a solution: one file located in the same directory as the solution file; another one with user scope, located at %appdata%\NuGet\nuget.config; and finally settings for all operations on the entire computer, in %ProgramFiles(x86)%\NuGet\Config\nuget.config.

You also need to know, from the standard nuget.config reference documentation, that you can add credentials directly into the nuget.config file, specifying username and password for each package source. So I can place, on each computer with an active TeamCity agent, a computer-level nuget.config with the credentials for my private feed, and the game is done. I just created one file and copied it to all TeamCity machines with agents.

The real problem with this approach is that the token is included in clear text in the config file located in C:\Program Files (x86)\NuGet\Config.

At this point you can hit this issue: https://github.com/NuGet/Home/issues/3245. If you include a clear-text password in the nuget.config Password field, you will get a strange “The parameter is incorrect” error, because a clear-text password should be specified in the ClearTextPassword parameter. After all, who writes a clear-text password in a file?
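Here is a minimal sketch of such a machine-level nuget.config (feed name, URL and token are placeholders), using ClearTextPassword as the issue above requires:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <add key="proximo" value="https://pkgs.dev.azure.com/xxxxx/_packaging/yyyyyy/nuget/v3/index.json" />
      </packageSources>
      <packageSourceCredentials>
        <proximo>
          <add key="Username" value="anything" />
          <!-- clear text: anyone who can read this file can read the token -->
          <add key="ClearTextPassword" value="myaccesstoken" />
        </proximo>
      </packageSourceCredentials>
    </configuration>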

This leads to the final and most secure solution: on your computer, download the latest NuGet version into a folder, then issue this command:

    .\nuget.exe sources add -name proximo `
      -source https://pkgs.dev.azure.com/xxxxx/_packaging/yyyyyy/nuget/v3/index.json `
      -username anything `
      -password myaccesstoken

This command adds, for the current user, a feed named proximo that points to the correct source with username and password. After you have added the source, you can simply go to the %appdata%\NuGet folder and open the nuget.config file.


Figure 6: Encrypted credentials stored inside the nuget.config file.

As you can see, nuget.exe stored an encrypted version of your token in your user configuration; if you issue nuget list -Source proximo again, you can verify that NuGet logs in automatically without any problem, because it is using the credentials in the config file.

The real problem with this approach is that the credentials are encrypted for the specific machine and user that issued the command; you cannot reuse them on a different machine, or on the same machine with a different user.

NuGet password encryption cannot be shared between different users or different computers; it works only for the current user on the current computer.


Figure 7: Include a clear-text password in the machine-level nuget.config to make every user able to access that specific feed

To conclude, you have two options: you can generate a token, include it in clear text in nuget.config, and copy that file to all of your TeamCity build servers. If you do not like clear-text tokens in files, have the agent run under a specific build user, log in to the agent machine as that build user, and issue nuget sources add to have the token encrypted for that user.

In both cases you need to renew the configuration each year (tokens last for at most one year). You can automate this process with a special build that calls nuget sources add.
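A sketch of such a renewal step in PowerShell (the source name is the one registered above; the environment variable holding the new token is a placeholder; note that for a source that already exists the verb is sources update rather than sources add):

    .\nuget.exe sources update -Name proximo `
      -UserName anything `
      -Password $env:RENEWED_PAT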

Gian Maria.

BruteForcing login with Hydra

Without any doubt, Hydra is one of the best tools to bruteforce passwords. It supports many protocols, but it can be used against standard web sites as well, forcing a standard POST-based login. The syntax is a little different from that of a normal scan (like SSH) and is similar to this command line:

./hydra -l username -P x:\temp\rockyou.txt hostname -s port http-post-form "/loginpage-address:user=^USER^&password=^PASS^:Invalid password!"

Dissecting the parameters, you have:

-l: specifies the single username you want to try (use uppercase -L to pass a file containing all the usernames you want to try)
-P: specifies a file with all the passwords you want to try; rockyou.txt is a notable standard
-s: specifies the port the site is listening on, when it differs from the protocol default

After these three parameters comes the part that selects the page and the payload you want to send to the site. You start with http-post-form to specify that you want a POST request with a form-urlencoded body, followed by a special string composed of three parts separated by colons.

The first part is the page that will be called; the second part is the payload, with ^USER^ and ^PASS^ placeholders that will be substituted by Hydra on each attempt; finally, the last part is the text Hydra should look for to understand that access was denied. Once launched, it will try to bruteforce the password at tremendous speed.
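For example, against a hypothetical site listening on port 8080, with a login page at /login.php that answers “Invalid password!” on failure, the three colon-separated parts look like this:

    # part 1: /login.php                    -> page that receives the POST
    # part 2: user=^USER^&password=^PASS^   -> body; Hydra replaces the placeholders
    # part 3: Invalid password!             -> text that marks a FAILED attempt
    ./hydra -l admin -P rockyou.txt 10.0.0.5 -s 8080 http-post-form "/login.php:user=^USER^&password=^PASS^:Invalid password!"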


Figure 1: Hydra in action

As you can see, it works perfectly on Windows too.

Gian Maria.

Security in 2019: unprotected ElasticSearch instances still exist

Today I received a notification from https://haveibeenpwned.com/ because one of my email addresses was present in a data breach.


Ok, it happens, but two things disturbed me. The first is that I had really never heard of those guys (People Data Labs); that is because they are one of the companies that harvest public data from online sources, aggregate it, and re-sell it as “data enrichment”. This means that they probably have only public data on me. If you are interested, you can read Troy Hunt's article https://www.troyhunt.com/data-enrichment-people-data-labs-and-another-622m-email-addresses/ for details about this breach.

But the second, and more disturbing, issue is that in 2019 people still leave ElasticSearch open and unprotected in the wild. This demonstrates really low attention to security, especially in situations where Elasticsearch runs on a server with public exposure. It is really sad to see that security is still a second-class citizen in software development; if it were not, such trivial errors would not be made.

Leaving ElasticSearch unprotected and bound to a public IP address is a sign of zero attention to security.

If you have ElasticSearch on a server with public access (such as machines in the cloud) you should:

1) Pay for the authentication module so you can secure your instance or, at least, use some open source module for basic auth. No Elasticsearch installed on a machine with public access can be left without authentication.
2) Bind the ElasticSearch instance to private IP addresses only (the ones used by the legitimate machines that should contact ES), and be 100% sure it does not listen on a public address; see the sketch after this list.
3) Add a firewall rule to explicitly close port 9200 towards public networks (in case someone messes with rule 2).
4) Open port 9200 only for the internal IPs that legitimately access ElasticSearch.
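For point 2, the bind address is controlled in elasticsearch.yml. A minimal sketch, where 10.0.0.5 is a placeholder for your private interface address:

    # elasticsearch.yml - listen only on the private interface, never on 0.0.0.0
    network.host: 10.0.0.5
    http.port: 9200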

Finally, you should check with some automatic tool whether, for any reason, your ES instance starts responding on port 9200 on the public IP address, to verify that nobody messed with the aforementioned rules.
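Even a trivial scheduled probe is enough. A sketch with curl, where PUBLIC_IP is a placeholder for your server's public address:

    # if this prints anything, the instance is exposed: raise an alert
    curl -s --max-time 5 http://PUBLIC_IP:9200/ && echo "ALERT: Elasticsearch answers on the public IP"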

Gian Maria.

Quick Peek at Microsoft Security Code Analysis: Credential Scanner

Microsoft Security Code Analysis contains a set of tasks for Azure DevOps pipelines to automate some security checks while building your software. Automatic security scanning tools are in no way a substitute for human security analysis; remember: if you develop code ignoring security, no tool can save you.

Despite this fact, there are situations where static analysis can really benefit you, because it can spare you some simple and silly errors that can lead to trouble. Each task in the Microsoft Security Code Analysis package is designed to solve a particular problem and to prevent some common mistake.

Remember that security cannot be enforced only with automated tools; nevertheless, they are useful to avoid some common mistakes and are not meant to replace a security audit of your code.

The first task I suggest you look at is Credential Scanner, a simple task that searches source code for potential credentials inside files.


Figure 1: Credential scanner task

Modern projects, especially those designed for the cloud, use tons of sensitive data that can be mistakenly stored in source code. The easiest mistake is storing credentials for databases or other services inside a configuration file, like web.config for ASP.NET projects, or leaving some token for a cloud resource or service in code, leaving that resource unprotected.
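The classic offender looks like this (illustrative values only):

    <!-- web.config fragment: a real production password committed to source control -->
    <connectionStrings>
      <add name="MainDb"
           connectionString="Server=proddb01;Database=app;User Id=sa;Password=P@ssw0rd123!" />
    </connectionStrings>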

Including Credential Scanner in your Azure Pipeline can save you trouble: with minimal configuration you can have it scan your source code to find credentials. All you need to do is drop the task into the pipeline, use the default configuration, and you are ready to go. Full details on configuring the task can be found here.


Figure 2: Configuration pane for Credential Scanner
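If you prefer YAML pipelines, dropping the task in is a step like the following sketch (the task name and version come from the Microsoft Security Code Analysis extension and may differ depending on the extension version installed in your organization):

    steps:
    - task: CredScan@2
      displayName: 'Run Credential Scanner'
      inputs:
        outputFormat: 'csv'   # the same CSV output analyzed later in this post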

Credential Scanner will run in your pipeline and report any problems it finds.


Figure 3: Credential scanner found a match.

As you can see in Figure 3, Credential Scanner found a match, but the task does not make the build fail (as you might expect). This is normal behavior, because all security tasks are meant to produce an output file with the scan results, and it is the duty of another dedicated task to analyze all result files and make the build fail if problems are found.

It is normal for security-related tasks not to fail the build immediately; a dedicated task is needed to analyze ALL log files and fail the build if needed.

The Post Analysis task is your friend here.


Figure 4: Add a Post Analysis task to have the build fail if any of the security-related tasks found problems

This special task allows you to specify which of the security tasks you want to analyze, and this is the reason why the build does not fail immediately when Credential Scanner finds a problem. The goal here is to run ALL security-related tasks, then analyze all their results and have the build fail if problems were found.


Figure 5: Choose which analyzer you want to use to make the build fail.
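In a YAML pipeline the same choice looks like this sketch (same caveat on task name and version):

    - task: PostAnalysis@1
      displayName: 'Fail the build on security scan results'
      inputs:
        CredScan: true   # fail the build if Credential Scanner reported matches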

After you add this task at the end of the build, your build will fail if security problems are found.


Figure 6: Build fails because one of the analyzers found problems. In this specific situation we have credentials in code.

As you can see in Figure 6, the Credential Scanner task is green, and it is the Security Post Analysis task that made the build fail. It also logs some information in the build errors page, as you can see in Figure 7.


Figure 7: Build fails for issues in credential scanner

Now the final question is: where can I find the CSV file generated by the tool? The answer is simple: there is another special task whose purpose is to upload all logs as artifacts of the build.


Figure 8: Simply use the PublishSecurityAnalysisLog task to have all security related logs published as artifacts.
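The corresponding YAML sketch (same caveat on the task version):

    - task: PublishSecurityAnalysisLogs@3
      displayName: 'Publish security analysis logs'
      inputs:
        ArtifactName: 'CodeAnalysisLogs'   # the artifact folder you will see on the build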

As you can see in Figure 9, all the logs are correctly uploaded as artifacts and divided by tool type. In this example I ran only the Credential Scanner tool, so its output is the only one in my artifacts folder.


Figure 9: Credential Scanner output included as a build artifact.

After downloading the file you can open it with Excel (I usually use CSV output for Credential Scanner) and find what's wrong.


Figure 10: The CSV output contains the file with the error and the line number, but everything else is redacted for security

As I can verify from the CSV output, I have a problem at line 9 of the config.json file; time to look at the code and find the problem.


Figure 11: Password included in a config file.

In the CSV output file, the Credential Scanner task only stores the file, the row number, and a hash of the credential found; this is needed to avoid leaking the credential in the build output.

Now, this example was made for this post, so do not try that password against me; it will just not work :). If you think that you would never fall for this silly mistake, remember that no one is perfect. Even though I try to avoid these kinds of errors, I must admit that some years ago a nice guy contacted me to tell me I had left a valid token in one of my samples. Shame on me, but these kinds of errors can happen. Thanks to Credential Scanner, you can really mitigate them.

If you wonder what kind of rules the task uses to identify passwords, the documentation states that:

CredScan relies on a set of content searchers commonly defined in the buildsearchers.xml file. The file contains an array of XML serialized objects that represent a ContentSearcher object. The program is distributed with a set of searchers that have been well tested but it does allow you to implement your own custom searchers too.

So you can download the task and examine the DLL, but the nice aspect is that you can also include your own searchers.

If the tool finds a false positive, and you are really sure the match is indeed a false positive, you can use a suppression file as per the documentation; a sketch follows Figure 12.


Figure 12: Suppression rules for the task.
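A sketch of such a suppression file (the JSON format used by CredScan; the file path and justification are illustrative):

    {
      "tool": "Credential Scanner",
      "suppressions": [
        {
          "file": "\\src\\config.json",
          "_justification": "Fake password used only in blog post samples"
        }
      ]
    }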

I must admit that Credential Scanner is really a powerful tool that should be included in every build, especially if you are developing open source code. Remember that there are lots of tools made to scavenge projects for this kind of vulnerability, so publishing a sensitive password or key in an open source project constitutes a big problem. Sooner or later it will bite you.

Gian Maria

Unable to execute .NET Core unit tests in VS after uninstalling older SDKs

It is not uncommon to have many versions of the .NET Core framework installed, especially after many Visual Studio updates. Each installation consumes disk space, so I decided to clean up everything, leaving only the latest major versions of the framework installed on the system.

Everything worked fine, except Visual Studio Test Explorer, which, upon a test run request, generates this error in the Tests output window:

[21/11/2019 6:08:03.911 ] ========== Discovery aborted: 0 tests found (0:00:05,5002292) ==========
[21/11/2019 6:08:03.934 ] ---------- Run started ----------
Testhost process exited with error: It was not possible to find any compatible framework version
The framework 'Microsoft.NETCore.App', version '2.2.0' was not found.

 

It seems that, after uninstalling old .NET Core SDKs, the Visual Studio test runner is no longer capable of executing tests based on .NET Core 2.2.

This happens for every .NET Core 2.2 test project; if I update the framework to netcoreapp3.0, everything works fine. After a brief check everything seemed ok, but then I realized that Visual Studio is an x86 process, so it is probably using the x86 version of the .NET Core runtime. Since I had uninstalled every version except the latest x64 version of each SDK (2.0, 2.1, 2.2, 3.0), I had actually removed every x86 version (except 3.0), and this could be the reason.

To verify this supposition, I went to the folder C:\Program Files (x86)\dotnet\shared\Microsoft.NETCore.App, where I could check every x86 version actually installed on my system. Gotcha: only version 3.0 was present. The final proof was downloading and installing the latest x86 version of the .NET Core 2.2 runtime to verify whether it would fix the issue, and, gotcha, VS was able to run unit tests based on .NET Core 2.2 again.
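A quick way to compare what the two hosts see (a sketch; paths assume default install locations):

    # x64 runtimes
    dir "C:\Program Files\dotnet\shared\Microsoft.NETCore.App"
    # x86 runtimes - the list the x86 Visual Studio process cares about
    dir "C:\Program Files (x86)\dotnet\shared\Microsoft.NETCore.App"
    # or ask the x86 host directly
    & "C:\Program Files (x86)\dotnet\dotnet.exe" --list-runtimes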

The Visual Studio test runner seems to use the x86 version of the .NET Core runtime, so if you uninstall it (leaving only the x64 version) you are no longer able to run tests based on that .NET Core version from Test Explorer.


Figure 1: My C:\Program Files (x86)\dotnet\shared\Microsoft.NETCore.App folder, showing that the 2.2.5 x86 version of the runtime is now installed.

Gian Maria