Why I love DevOps and hate DevSecOps

DevOps is becoming a buzzword: it generates hype and everyone wants to be part of it, even those who do not know exactly what DevOps is. One of the symptoms of this is the “DevOps Engineer”, a title that does not fit in my head.

We could debate for days, or years, about the right definition of DevOps, but essentially it is a cultural approach to building software, focused on building the right thing with the maximum quality and satisfaction for the customer.

Remember, DevOps is a cultural approach based on transparency and inclusion, not a set of tools and practices.

In my head DevOps is not really different from Agile; it just has a different perspective. And I know, most of you are probably crying out loud because plenty of gurus, articles and sites explain the difference between Agile and DevOps, but I simply do not care. If you think that DevOps is only about continuous deployment, tools and practices, you are probably wrong. While I admit that tools and practices can be the backbone of a DevOps culture, everything should start with culture, collaboration, inclusion and transparency; tools and practices come later in the game.

What I care about, as a professional who gets paid to create software, is the satisfaction of the Customer, because it brings me more work and makes me proud of what I do; after all, in this industry, we all love our work. I read “The Goal” many years ago and it is still relevant, as is the Theory of Constraints, even in software. In my mind DevOps is just another attempt to change the way we make software for the good of the team, ops, the Customer and the users.

Given that, what is a DevOps Engineer? Creating DevOps-prefixed roles in a DevOps culture is really bad, because every person on the team is part of DevOps: Customer + Developers + Operations.

DevOps culture permeates the work environment and we do not need DevOps XXX roles; everyone is part of the DevOps culture, and if you feel the urge to hire a DevOps Engineer it just means that the other members of the team are outside the DevOps culture. A DevOps XXX is just a patch over your culture problem, and it simply will not work.

That is why, since I love DevOps, I hate every DevXXXOps: there is nothing more to add to DevOps.

This is why I hate DevSecOps with all my heart; since DevOps is now a buzzword, as is Security, why not create a super buzzword like DevSecOps?

Let me be crystal clear: if you claim that your organization has a DevOps culture and you do not care about security, you are doing it dead wrong. Security is paramount; it should be part of every professional, practice and culture, and it should permeate every part of the software lifecycle. If you think that you need to add Sec to DevOps, it just means that you are not caring about security in your culture, and this is a HUGE problem that should be addressed before bringing a new buzzword into the game.

Gian Maria.

Check for Malware in an Azure DevOps Pipeline

In a previous post I showed Credential Scanner, a special task that is part of Microsoft Security Code Analysis for Azure DevOps; today I want to take a quick peek at the Anti-Malware scanner task.

First of all, a simple consideration: I’ve been asked several times whether there is any need for AntiVirus or AntiMalware tools on build machines; after all, the code that is built is written by your own developers, so there should be no need for such tools, right? In my opinion this is a false assumption; here are some quick considerations on how malware can end up on your build machine.

1) If you build open source code where others can contribute, and there is no constant analysis of the code, it is simple for a malicious user to modify a script or a YAML build to do something nasty to your build machine. Ok, this is really an edge case, but think of an angry employee who got fired and wants to damage the company…

2) NuGet, npm and, in general, every package manager downloads stuff from the internet; this alone should be enough to justify keeping an AntiVirus on your build agent. Any npm package you are using can be hijacked to download anything, and, generally speaking, everything can go south when you download stuff from the internet. I know that npm and NuGet probably do some checks on packages, but there is no real formal approval process, so I think that no one can guarantee that everything that comes from NuGet, npm or the like is safe.

3) Custom tasks in Azure DevOps are also downloaded from the server, but in this situation the risk is mitigated, because Microsoft checks the products that are in the Marketplace.

In my opinion, since a build agent downloads stuff from the internet and executes scripts written by humans, it is better to have a good security solution constantly monitoring the agent working folder.

Ok, point 2 is the real risk; to mitigate it, the only solution is to point to private NuGet or npm repositories and double check every package that you allow in from nuget.org or the main npm registry. The goal is: before a new version of a library is allowed to be used, someone should check that it carries no risks. npm is especially annoying, because an npm install usually updates libraries automatically; this is why in a build you should always prefer npm ci over npm install.
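
As a minimal sketch, assuming a hypothetical private registry URL, a build script could force npm to use the private feed and then do a clean, lockfile-driven install:

npm config set registry https://npm.mycompany.example/   # hypothetical private registry
npm ci   # installs exactly what package-lock.json specifies, instead of npm install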

Generally speaking, in my opinion it is better to have an antivirus on your build machine and to be 100% sure that the agent working folder is constantly monitored.
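
If you want an extra manual check, and assuming Windows Defender is present on the build machine, you can also trigger a custom scan of the agent working folder from a script (the folder path is just an example):

"%ProgramFiles%\Windows Defender\MpCmdRun.exe" -Scan -ScanType 3 -File "C:\agent\_work"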

To add an extra level of security, I’d also like to have a report in my build certifying that the output of the build is safe: welcome Microsoft Malware Scanner Analyzer. This is another task from Microsoft Security Code Analysis whose purpose is to scan a specific folder and report the analysis in the build.

Task configuration is quite simple: you can usually leave all the default settings and you are ready to go.

Figure 1: Configuration of Malware scanner Task

The only real parameter you want to configure is the path to scan, which is usually the artifacts directory, so you can be confident that the output of the build that will be uploaded to the service is malware free. Having another AntiVirus, as I said before, gives you double security: the standard antivirus kicks in automatically, while this task does another check and uploads the result to the build.

Figure 2: Output of Malware scanner in build output

Call me paranoid, but I really like having some assurance that my artifacts are secure. I know perfectly well that this is not a 100% guarantee that everything is good, but it is a good place to start.

Another nice aspect of the tool is that the output of the scan is also included as an artifact.

Figure 3: Anti-malware scanner log uploaded as an artifact.

This allows everyone who downloads the artifacts for installation to check the output of the scanner.

Remember, when it comes to security, having a double check is better than having a single check.

Gian Maria

BruteForcing login with Hydra

Without any doubt, Hydra is one of the best tools to brute force passwords. It has support for many protocols, but it can also be used against standard web sites, attacking a plain POST-based login form. The syntax is a little different from a normal scan, such as SSH, and looks similar to this command line.

./hydra -l username -P x:\temp\rockyou.txt hostname -s port http-post-form "/loginpage-address:user=^USER^&password=^PASS^:Invalid password!"

Dissecting the parameters, you have:

-l: specifies a single username to try (with -L you can instead pass a file containing all the usernames you want to try)
-P: specifies a file with all the passwords you want to try; rockyou.txt is a well-known standard wordlist
-s: the port the site is listening on

After these three parameters comes the part needed to select the site and the payload you want to send to it. You start with http-post-form to specify that you want a POST request with a form-urlencoded body, followed by a special string composed of three parts separated by colons.

The first part is the page that will be called; the second part is the payload, with the ^USER^ and ^PASS^ placeholders that Hydra substitutes on each attempt; finally, the last part is the text Hydra should look for to understand that access was denied. Once launched, it will try to brute force the password at tremendous speed.
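
Putting it all together, a complete (purely illustrative) invocation against a test machine at 192.168.56.101, listening on port 8080 and exposing a /login.php page that prints “Invalid password!” on failure, would be:

./hydra -l admin -P rockyou.txt 192.168.56.101 -s 8080 http-post-form "/login.php:user=^USER^&password=^PASS^:Invalid password!"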

Figure 1: Hydra in action

As you can see, it also works perfectly on Windows.

Gian Maria.

Security in 2019: unprotected ElasticSearch instances still exist

Today I received a notification from https://haveibeenpwned.com/ because one of my email addresses was present in a data breach.

Ok, it happens, but two things disturbed me. The first is that I had really never heard of those guys (People Data Labs); this is because they are one of the companies that harvest public data from online sources, aggregate it and re-sell it as “data enrichment”. This means that they probably have only public data about me. If you are interested, you can read Troy Hunt’s article https://www.troyhunt.com/data-enrichment-people-data-labs-and-another-622m-email-addresses/ for details about this breach.

But the second, and more disturbing, issue is that in 2019 people still leave ElasticSearch instances open and unprotected in the wild. This demonstrates really low attention to security, especially in situations where ElasticSearch runs on a server with public exposure. It is really sad to see that security is still a second-class citizen in software development; otherwise such trivial errors would not be made.

Leaving ElasticSearch unprotected and bound to a public IP address is a sign of zero attention to security.

If you have ElasticSearch on a server with public access (such as machines in the cloud) you should:

  • 1) Pay for the authentication module so you can secure your instance or, at least, use some open source module for basic auth. No ElasticSearch installed on a machine with public access should be left without authentication.
  • 2) Bind the ElasticSearch instance to the private IP address only (the one used by all the legitimate machines that should contact ES) and be 100% sure that it does not listen on the public address (see the sketch after this list).
  • 3) Add a firewall rule to explicitly close port 9200 for public networks (in case someone messes with rule 2).
  • 4) Open port 9200 only for the internal IPs that are legitimately allowed to access ElasticSearch.
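
As a minimal sketch of points 2, 3 and 4 (the private IP 10.0.0.5 and the subnet 10.0.0.0/24 are just examples), you can bind ElasticSearch in elasticsearch.yml and then restrict port 9200 with firewall rules:

# elasticsearch.yml: listen only on the private interface
network.host: 10.0.0.5
http.port: 9200

# ufw: allow 9200 only from the internal subnet, deny it from anywhere else
sudo ufw allow from 10.0.0.0/24 to any port 9200 proto tcp
sudo ufw deny 9200/tcp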

Finally, you should check with some automated tool whether, for any reason, your ES instance starts responding on port 9200 on the public IP address, to verify that no one has messed with the aforementioned rules.
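
A tiny example of such a check, assuming 203.0.113.10 is your public IP (a placeholder) and that nothing from outside should ever get an answer on port 9200, can be scheduled with cron:

#!/bin/bash
# Alert if ElasticSearch answers on the public address: it never should
if curl -s -m 5 http://203.0.113.10:9200/ > /dev/null; then
  echo "WARNING: ElasticSearch is reachable on the public IP" | mail -s "ES exposed on 9200" ops@example.com
fi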

Gian Maria.

GitHub Security Alerts

I really love everything about security and I’m really intrigued by the GitHub security tab that is now present on your repository. It is usually disabled by default.

Figure 1: GitHub Security tab on your repository

If you enable it, you start receiving suggestions based on the code that you check in to the repository; as an example, GitHub will scan your npm package files to find dependencies on libraries that are insecure.

When GitHub finds something that requires your attention, it puts a nice warning banner on your project, so the alert cannot really go unnoticed.

Figure 2: Security alert warning banner

If you go to the security tab you get a detailed list of the findings, so you can put a remediation plan in motion, or simply dismiss them if you believe you can live with them.

Figure 3: Summary of security issues for the repository

Clearly you can click on any issue to get a detailed description of the vulnerability, so you can decide whether to fix it or simply dismiss it because the issue is not relevant to you or because there is no way for you to work around the problem.

Figure 4: Detailed report of security issue

As you may have noticed in Figure 4, there is also a nice “Create Automated Security Fix” button in the upper right part of the page; this means that GitHub is not only telling me where the vulnerability is, it can sometimes fix the code for me. Pressing the button simply creates a new Pull Request that fixes the error. How nice.

Figure 5: Pull request with the fix for the security issue

In this specific situation it is simply a vulnerable package downloaded by npm install; the change just bumps the library to a version where the vulnerability was removed.

GitHub performs a security scan of your project dependencies and can propose a remediation simply through nice pull requests.

Using pull requests is really nice, truly in the spirit of GitHub. The overall experience is great; the only annoying part is that the analysis seems to be done on the master branch and the proposed solution creates pull requests against the master branch. While this is perfectly fine, the problem I have is that closing that pull request from the UI merges the commit into the master branch, effectively bypassing the GitFlow process.

Since I’m a big fan of the command line, I prefer to close that pull request manually, so I simply issue a fetch, identify the new branch (it has an annoyingly long name) and check it out as a hotfix branch.

$ git checkout -b hotfix/0.3.1 remotes/origin/dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2
Switched to a new branch 'hotfix/0.3.1'
Branch 'hotfix/0.3.1' set up to track remote branch 'dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2' from 'origin'.

With these commands I simply check out the remote branch as hotfix/0.3.1, so I can then issue a git flow hotfix finish and push everything back to the repository, as shown below.
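
For reference, this is the sequence that closes the hotfix and pushes it back, assuming the default GitFlow branch names:

git flow hotfix finish 0.3.1
git push origin master develop --follow-tags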

If you have a specific flow for hotfixes, like GitFlow, it is quite easy to close pull requests locally following your own process; GitHub will automatically detect that the PR is closed after the push.

Now the branch is correctly merged.

Figure 6: Pull request manually merged.

If you really like this process, you can simply ask GitHub to automatically create pull requests without your intervention. As soon as a security fix is present, a PR will be created.

Figure 7: You can ask to receive automated pull request for all vulnerabilities

Et voilà, it is raining pull requests

Figure 8: A series of Pull requests made to resolve security risks

This raises another little issue: we have a single PR for each vulnerability, so if I want to apply all of them in a single big hotfix, I only need to manually start the hotfix, then fetch all those branches from the repo and finally cherry-pick all the commits. This operation is easy because each pull request contains a single commit that fixes a single vulnerability. The sequence of commands is:

git flow hotfix start 0.3.2
git cherry-pick commit1
git cherry-pick commit2
git cherry-pick commit3
git flow hotfix finish
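
To find the commits to cherry-pick you can fetch and list the dependabot branches; each one contains a single commit (the branch below is the one shown earlier):

git fetch origin
git branch -r | grep dependabot
git log -1 --oneline origin/dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2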

The final result is a hotfix obtained by cherry-picking three distinct PRs.

Figure 9: Three pull requests were closed using simple cherry-picks

GitHub is really good at understanding that I’ve cherry-picked all the commits shown in yellow from the pull requests, because all the pull requests were automatically closed after the push.

Figure 10: My pull requests are correctly closed even if I cherry-picked all commits manually.

This functionality is really nice: in this simple repository I have very few lines of code, but it helped me reveal some npm dependencies with vulnerabilities and, most importantly, it gave me the solution so I could immediately put a remediation in place.

Gian Maria.