Multiline PowerShell on YAML pipeline

Sometimes a few lines of PowerShell in your pipeline are all you need to quickly customize a build without writing a custom task or adding a PowerShell file to source control. A typical situation: writing a file whose content is determined by a PowerShell script; in my case I needed to create a configuration file based on some build variables.

Since using the standard graphical editor to add a PowerShell task and then grabbing the YAML with the “View YAML” button is the quickest way to do this, be warned that you can incur the following error.

can not read a block mapping entry; a multiline key may not be an implicit key

This error happens when you put multiline text inside a YAML file with bad indentation of a multiline string. The inline PowerShell task really comes in handy, but you need to pay special attention, because the “View YAML” button in the UI sometimes generates bad YAML.

In Figure 1 you can see what happens when I copy a YAML task using the “View YAML” button of the standard graphical editor and paste it into a YAML build. In this situation the editor immediately shows me that the syntax is wrong. The real problem is that Visual Studio Code with the Azure Pipelines extension did not catch the error, leaving you with a failing build.


Figure 1: Wrong YAML syntax due to multiline PowerShell command line

It turns out that the View YAML button of the classic graphical editor misses an extra level of indentation needed for the content of the PowerShell script; the above task should be fixed this way:


Figure 2: Correct syntax to include a multiline script

If you want to include an inline PowerShell script, most of the time you do not want to limit yourself to a single line, so you need the multiline string syntax. Just use a pipe character (|) followed by a multiline string, where each newline will be replaced by a regular \n. The important rule is: the string must have one extra level of indentation with respect to the line that initiates the multiline string, as highlighted in Figure 2. The indentation is important because the YAML parser considers the string finished when it encounters a line with one level of indentation less than the multiline string.

The pipe symbol at the end of a line indicates that any indented text that follows is a single multiline string. See the YAML spec – Literal styles.
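As a minimal sketch (the task inputs and script content here are illustrative, not the original pipeline's), a correctly indented inline PowerShell task that writes a configuration file from a build variable looks like this; note that every line of the script body sits one indentation level deeper than the `script:` key:

```yaml
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Each script line is indented one level deeper than "script:"
      $config = @{ Environment = "$(BuildConfiguration)" } | ConvertTo-Json
      Set-Content -Path "$(Build.ArtifactStagingDirectory)\settings.json" -Value $config
```

The parser treats everything at that deeper indentation as one literal string, so the script can span as many lines as needed.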

This is another reason to use the online editor for YAML builds: as you can see in Figure 1, it is able to immediately spot syntax errors.

Gian Maria

Azure DevOps multi stage pipeline environments

In a previous post on releasing with Multi Stage Pipelines and YAML code I briefly introduced the concept of environments. In that example I used an environment called single_env, and you may be surprised that, by default, an environment is automatically created when the release runs.

This happens because an environment can be seen as a set of resources used as a target for deployments, but in the current preview version of Azure DevOps you can only add Kubernetes resources. The question is: why did I use an environment to deploy an application to Azure if there is no connection between the environment and my Azure resources?

At this stage of the preview, we can only connect Kubernetes to an environment; no other physical resource can be linked.

I have two answers for this. The first is: the Multi Stage Release pipeline in YAML is still in preview and we know it is still incomplete. The other is: an environment is a set of information and rules, and rules can be enforced even if there is no direct connection with physical resources.


Figure 1: Environment in Azure DevOps

As you can see in Figure 1, a first advantage of an environment is that I can immediately check its status. From the above picture I can immediately spot that I have a release successfully deployed on it. Clicking on the status opens the details of the pipeline released on that environment.

Clicking on the environment name opens the environment detail page, where you can view all information for the environment (its name and all releases made on it), add resources, and manage Security and Checks.


Figure 2: Security and Checks configuration for an environment.

Security is pretty straightforward: it is used to decide who can use and modify the environment. The really cool feature is the ability to create checks. If you click Checks in Figure 2 you are redirected to a page that lists all the checks that must pass before the environment can be used as a deployment target.


Figure 3: Creating a check for the environment.

As an example I created a simple manual approval, put myself as the only approver and added some instructions. Once a check is created, it is listed in the Checks list for the environment.


Figure 4: Checks defined for the environment single_env

If I trigger another run, something interesting happens: after the build_test stage completes, the deploy stage is blocked by the approval check.


Figure 4: Deploy stage was suspended because the related environment has a check to be fulfilled

Check support in environments can be used to apply deployment gates, like the manual approvals of the standard Azure DevOps classic release pipeline.

Even if there is no physical link between the environment and the Azure account where I am deploying my application, the Azure pipeline detects that the environment has a check and blocks the execution of the script, as you can see in Figure 4.

Clicking on the check link in Figure 4 opens a detail view with all the checks that must pass before the deploy script continues. In Figure 5 you can see that the deploy is waiting for me to approve it; I can simply press the Approve button to have the deploy script start.


Figure 5: Checks for deploy stage

Once an agent is available, the deploy stage can start, because all the checks for the related environment are fulfilled.


Figure 6: Deploy stage started, all the checks have passed.

Once the deploy operation finished, I can always go back and verify the checks. In Figure 7 you can see how easy it is to find who approved the release in that environment.


Figure 7: Verifying the checks for deploy after pipeline finished.

At the moment the only available check is manual approval, but I expect more and more checks to become available in the future, so keep an eye on future release notes.

Gian Maria.

GitHub security Alerts

I really love everything about security and I'm really intrigued by the GitHub Security tab that is now present on your repository. It is usually disabled by default.


Figure 1: GitHub Security tab on your repository

If you enable it, you start receiving suggestions based on the code you check in to the repository. As an example, GitHub will scan your npm package manifests to find dependencies on libraries with known vulnerabilities.

When GitHub finds something that requires your attention, it puts a noticeable warning header on your project, so the alert cannot pass unnoticed.


Figure 2: Security alert warning banner

If you go to the Security tab you get a detailed list of the findings, so you can put a remediation plan in motion, or simply dismiss those you believe you can live with.


Figure 3: Summary of security issues for the repository

Clearly you can click on any issue to get a detailed description of the vulnerability, so you can decide whether to fix it or simply dismiss it because it is not relevant to you or you cannot bypass the problem in any way.


Figure 4: Detailed report of security issue

If you look at Figure 4, you will also notice a nice “Create Automated Security Fix” button in the upper right part of the page. This means that GitHub is not only telling me where the vulnerability is: sometimes it can fix the code for me. Pressing the button simply creates a new Pull Request that fixes the error. How nice.


Figure 5: Pull request with the fix for the security issue

In this specific situation it is simply a vulnerable package that is downloaded by npm install; the change simply bumps a library to a version that removed the vulnerability.
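For reference, an automated fix of this kind typically boils down to a one-line version bump in package.json (and the corresponding lock file). Sketched here with the tar package named in the branch above; the surrounding manifest content is hypothetical:

```json
{
  "dependencies": {
    "tar": "^2.2.2"
  }
}
```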

GitHub performs a security scan on project dependencies and can propose a remediation simply with nice pull requests.

Using pull requests is really nice, really in the spirit of GitHub. The overall experience is great; the only annoying part is that the analysis seems to be done on the master branch, and the proposed solution creates pull requests against master. While this is perfectly fine, the problem is that closing that pull request from the UI merges the commit into master, effectively bypassing the GitFlow flow.

Since I’m a big fan of the command line, I prefer to close that pull request manually, so I simply issue a fetch, identify the new branch (it has an annoyingly long name) and check it out as a hotfix branch.

$ git checkout -b hotfix/0.3.1 remotes/origin/dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2
Switched to a new branch 'hotfix/0.3.1'
Branch 'hotfix/0.3.1' set up to track remote branch 'dependabot/npm_and_yarn/CoreBasicSample/src/MyWonderfulApp.Service/UI/tar-2.2.2' from 'origin'.

With these commands I simply check out the remote branch as hotfix/0.3.1, so I can issue a git flow hotfix finish and push everything back to the repository.
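The whole round trip can be sketched with plain git in a throwaway repository (branch and file names are made up, and git flow hotfix finish is approximated here by a plain --no-ff merge into master):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo base > app.txt && git add . && git commit -qm "base"
git branch -M master
# Simulate the dependabot-style fix branch (hypothetical name)
git checkout -qb dependabot/npm_and_yarn/tar-2.2.2
echo "bump tar" >> app.txt && git commit -aqm "Bump tar to 2.2.2"
# Check the fix branch out under a local hotfix name...
git checkout -qb hotfix/0.3.1 dependabot/npm_and_yarn/tar-2.2.2
# ...then merge it back, as "git flow hotfix finish" would do
git checkout -q master
git merge -q --no-ff -m "Finish hotfix 0.3.1" hotfix/0.3.1
```

After pushing both master and the (now merged) branch, GitHub detects the merge and closes the PR automatically.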

If you have a specific flow for hotfixes, like GitFlow, it is quite easy to close pull requests locally, following your process; GitHub automatically detects that the PR is closed after the push.

Now the branch is correctly merged.


Figure 6: Pull request manually merged.

If you really like this process, you can ask GitHub to automatically create pull requests without your intervention. As soon as a security fix is available, a PR will be created.


Figure 7: You can ask to receive automated pull request for all vulnerabilities

Et voilà, it is raining pull requests.


Figure 8: A series of Pull requests made to resolve security risks

This raises another little issue: we have a single PR for each vulnerability, so if I want to apply all of them in one big hotfix, I only need to manually start the hotfix, then fetch all those branches from the repo and finally cherry-pick all the commits. This operation is easy because each pull request contains a single commit that fixes a single vulnerability. The sequence of commands is:

git flow hotfix start 0.3.2
git cherry-pick commit1
git cherry-pick commit2
git cherry-pick commit3
git flow hotfix finish

The final result is a hotfix built by cherry-picking three distinct PRs.


Figure 9: Three pull requests were closed using simple cherry-picks

GitHub is really good at understanding that I cherry-picked all the commits in yellow from pull requests, because all pull requests were automatically closed after the push.


Figure 10: My pull requests are correctly closed even if I cherry-picked all commits manually.

This functionality is really nice: in this simple repository I have really few lines of code, but it helped me reveal some npm dependencies with vulnerabilities and, most importantly, it gave me the solution so I could immediately put a remediation in place.

Gian Maria.

Release app with Azure DevOps Multi Stage Pipeline

Multi Stage pipelines are still in preview on Azure DevOps, but it is time to experiment with a real build-release pipeline, to taste the news. The biggest limit at this moment is that you can use Multi Stage pipelines to deploy to Kubernetes or to the cloud, but there is no support for agents in VMs (like the standard release engine). This support will be added in the upcoming months, but if you use Azure or Kubernetes as a target you can already use it.

My sample solution is on GitHub; it contains a really basic ASP.NET Core project with some basic REST APIs and a really simple Angular application. One of the advantages of having everything in the repository is that you can simply fork my repository and experiment.

Thanks to Multi Stage Pipelines we can finally have the build-test-release process directly expressed in source code.

First of all you need to enable Multi Stage Pipelines for your account in the Preview Features, by clicking on your user icon in the upper right part of the page.


Figure 1: Enable MultiStage Pipeline with the Preview Features option for your user

Once Multi Stage Pipelines are enabled, all I need to do is create a nice release file to deploy my app to Azure. The complete file is here; I will highlight the most important parts. This is the starting part.


Figure 2: First part of the pipeline

One of the core differences from a standard pipeline file is the structure of jobs: after trigger and variables, instead of directly having jobs, we have a stages section, followed by a list of stages that in turn contain jobs. In this example the first stage is called build_test; it contains all the jobs to build my solution, run some tests and compile the Angular application. Inside a single stage we can have more than one job, and in this particular pipeline I divided the build_test stage into two jobs: the first builds the ASP.NET Core app, the other builds the Angular application.
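Stripped to its skeleton (pool images and script contents here are placeholders, not the exact tasks of the sample project), the stage/job structure looks like this:

```yaml
trigger:
- master

stages:
- stage: build_test
  jobs:
  - job: build_aspnet
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: dotnet build MySolution.sln   # placeholder build step
  - job: build_angular
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: npm install && npm run build  # placeholder Angular build
```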


Figure 3: Second job of first stage, building angular app.

This part should be familiar to everyone used to YAML pipelines, because it is, indeed, a standard sequence of jobs; the only difference is that they sit under a stage. The convenient aspect of having two distinct jobs is that they can run in parallel, reducing overall compilation time.

If you have groups of tasks that are completely unrelated, it is probably better to divide them into multiple jobs and have them run in parallel.

The second stage is much more interesting, because it contains a completely different type of job, called deployment, used to deploy my application.


Figure 4: Second stage, used to deploy the application

The dependsOn section specifies that this stage can run only after the build_test stage finishes. Then the jobs section starts; it contains a single deployment job. This is a special type of job where you can specify the pool, the name of an environment and a deployment strategy; in this example I chose the simplest, a runOnce strategy composed of a list of standard tasks.
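A deployment job of this shape can be sketched as follows (pool image and the placeholder step are illustrative; single_env matches the environment name used in the related post):

```yaml
- stage: deploy
  dependsOn: build_test
  jobs:
  - deployment: deploy_site
    pool:
      vmImage: 'ubuntu-latest'
    environment: single_env
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying..."   # standard tasks go here
```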

If you wonder what the environment parameter means, I'll cover it in much more detail in a future post; for this example just ignore it and consider it a way to give a name to the environment you are deploying to.

Multi Stage pipelines introduce a new job type called deployment, used to perform the deployment of your application.

All child steps of the deployment job are standard tasks used in standard releases; the only limitation of this version is that they run on the agent, you cannot run them on machines inside the environment (today you cannot add anything other than a Kubernetes cluster to an environment).

The nice aspect is that, since this stage depends on build_test, when the deployment section runs it automatically downloads the artifacts produced by the previous stage and places them in the $(Pipeline.Workspace) folder, in a subdirectory named after the artifact itself. This solves the need to transfer the artifacts of the first stage (build and test) to the deployment stage.


Figure 5: Steps for deploying my site to azure.

Deploying the site is really simple: I just unzip the ASP.NET website to a subdirectory called FullSite, then copy all the compiled Angular files into the www folder and finally use a standard AzureRmWebAppDeployment task to deploy the site to my Azure website.
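Sketched in YAML (artifact names, service connection and web app name are hypothetical, and the real task inputs of the sample pipeline may differ), those three steps look roughly like this:

```yaml
steps:
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: '$(Pipeline.Workspace)/WebSite/*.zip'
    destinationFolder: '$(Pipeline.Workspace)/FullSite'
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Pipeline.Workspace)/Angular'
    targetFolder: '$(Pipeline.Workspace)/FullSite/www'
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'MyAzureConnection'
    WebAppName: 'my-sample-site'
    packageForLinux: '$(Pipeline.Workspace)/FullSite'
```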

Running the pipeline shows a different user interface than a standard build, clearly showing the result of each distinct stage.


Figure 6: Result of a multi stage pipeline has a different User Interface

I really appreciate this nice graphical representation of how the stages are related. For this example the structure is really simple (two sequential stages), but it clearly shows the flow of the deployment and is invaluable for more complex scenarios. If you click on Jobs you get the standard view, where all the jobs are listed in chronological order, with the Stage column allowing you to identify in which stage each job ran.


Figure 7: Result of the multi stage pipeline in jobs view

All the rest of the pipeline is pretty much the same as a standard pipeline; the only notable difference is that you need to use the stages view to download artifacts, because each stage has its own artifacts.


Figure 8: Downloading artifacts is possible only in the stages view, because each stage has its own artifacts.

Another nice aspect is that you can simply rerun each stage, which is useful in some special situations (like when your site is corrupted and you want to redeploy without rebuilding everything).

Now I only need to check whether my site was deployed correctly and… voilà, everything worked as expected, my site is up and running.


Figure 9: Interface of my really simple sample app

Even if Multi Stage pipelines are still in preview, if you need to deploy to Azure or Kubernetes they can be used without problems. The real limitation of the current implementation is the inability to deploy with agents inside VMs, a real must-have if you have an on-premise environment.

In the next post I'll deal a little more with Environments.

Gian Maria.

Vulnhub Tr0ll3 walkthrough

I had some time to have fun trying to hack the third machine of the Tr0ll series. This time, when I ran nmap, I was disappointed, because only the SSH port was open.


After some attempts with hydra and some password lists I felt really stupid, because the instructions on the machine say exactly where to start: start:here, so the user is start and the password is here. (Next time I should read the instructions better.)


After login I have a couple of directories; redpill contains a file with a link to a troll site :), in the very style of the Tr0ll series.


The other directory contains a file with a password for user Step2, but after some failed attempts I decided that this is another troll; it is not the right password for user step2, so I need to find another way.


Ok, so I went to the root folder and listed all the files on the machine, and I found something interesting.


A directory called .hints, with tons of other directories inside :), typical of a troll machine. The author really wants my Tab key to stop working :D.


The file gold_star.txt seems to contain lots of strings; they are not hashes, they are not typical encoded strings, could they be passwords? Trying hydra to verify whether the file contains the password for the start2 user I immediately hit a dead end: the gold_star.txt file is 37 MB, it contains TONS of strings, and I cannot use it to brute-force SSH.


The only option left, since I'm logged in with a low-privilege user, is trying to escalate privileges. If you search the internet for scripts that help you elevate privileges, you find tons of material. One of the first pages I stumbled into is this one that, like many other sites, suggests a script that should help me inspect the system for useful information. I downloaded it, used scp to transfer it to the tr0ll machine, gave it execution rights and launched it. Since it writes lots of output to the console and I need to search and examine that output, I can simply redirect it:

./ -t > lineum.txt

The resulting text file contains ANSI color codes (if you launch the script without redirection you get colored output); since coloring helps visualize things, I used the aha program to convert the ANSI colored text file to HTML.


And voilà, I have some nicely formatted output that I can read and search easily in the browser on my Kali machine.


This script finds lots of information and takes quite a while to read. After some time spent examining the file I wanted to understand whether running this script could have helped me find the gold file (instead of manual enumeration), and whoa, I found another interesting file: a Wireshark capture file.


Ok, this is definitely Tr0ll VM style. I was pretty sure that the capture contained some interesting data, but when I opened it there was nothing useful: everything was encrypted wireless traffic plus some standard handshaking (everything was listed as 802.11 protocol).


After monkeying with the file, I did not find anything useful, so I decided to return to the lineum.txt output and investigate it more carefully. A thing that rang a bell was the list of the last users that logged in to the system.


The Wireshark file is named after one of the users; this could not be a coincidence. That file was probably connected to the user, but how? The only two files in my mind are the cap file and the other file that seems to contain random strings, so the only thing that comes to mind is that one of those strings could be the wireless encryption key. Aircrack-ng confirmed my suspicion.

Thanks to Wireshark for having an option to include keys for IEEE 802.11 wifi traffic: simply inserting the key allows you to decrypt the file.


But nothing interesting is there: some UDP traffic, nothing that seems really useful. Following the UDP stream gives me some strings like KANNOU%N Archer C50 etc.



It seems that the UDP traffic does not reveal anything interesting (just chatter from a TP-Link router). After some time puzzling over it, I reconsidered the fact that the pcap file has the very same name as one of the users of the system (enumerated by the script). I tried to connect with SSH, and this confirmed that the WPA network key is also the password of user wytshadow.

Sadly, in the user folder there is no useful information, except an executable that continuously prints the very same string: iM Cr@zY L1k3 AAA LYNX.


It does not accept input, so I have little use for it, but the string reminded me of a reaaaaalllyyy old time at university, when we browsed the internet with Lynx. Lynx is a text-based web browser, used to browse from the console; it was very useful when we connected via SSH to a university machine and were able to browse with Lynx using the much faster university bandwidth.

At that point a bell rang in my head: all previous tr0ll machines hid something in web servers, so maybe with one of the users I have I can start apache2 or nginx, two of the most common web servers on Linux. It was true: this last user can start the nginx service. Nmap from the Kali machine now reveals a newly opened port, 8080, but when I browse to it I get a Forbidden error.


Ok, it is using some form of authentication that I need to bypass. On Windows systems, as an example, with IIS you can use Windows authentication, where the user running the browser is the one authenticated by the web server. I must admit that I'm not so skilled with nginx, so I started poking around all the configuration files in /etc/nginx to find the authorization method used. After some time I found a file called default (it is indeed the configuration file for the default site) that contains the configuration for the site.

You do not need to be an nginx expert to understand the above configuration: this is the site listening on port 8080, and if the user agent is different from Lynx, it returns 403. OOOHHHH, now I understand the output of the oohfun file. Probably if I had used Lynx from the local machine I would have been authenticated.
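The rule described above can be reconstructed roughly like this (the paths and the exact regex are assumptions, not the machine's actual configuration file):

```nginx
server {
    listen 8080;

    location / {
        root /var/www/html;   # assumed document root
        # Reject any client whose User-Agent does not start with "Lynx"
        if ($http_user_agent !~ "^Lynx") {
            return 403;
        }
    }
}
```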

To proceed in the spirit of hacking a web site, just fire up Burp Suite, place an interceptor to change your user agent, and you are ready to go.


Burp Suite comes with some preconfigured intercept-and-modify options for the user agent; just grab one and change it to some string that starts with “Lynx”, like I did in the previous picture. This allowed me to enter the site; now I have a password for another user.


This seems a never-ending story. I logged in via SSH with this new user, but this time the situation was better: in the home directory of the genphlux user there is a file called maleus, with a nice RSA private key; 99% it is the key that allows me to connect as user maleus. After all, this machine seems structured around getting one login after another, until I get the root login.

I downloaded the key to my Kali machine with scp and opened another terminal to connect with this other user. This way I have multiple terminals, each one connected with a different user, just to have everything under control.

The home folder of maleus contains a dont_even_bother file, a 64-bit executable that asks for a password. Using strings on it, to dump all the printable strings contained in the file, indeed seems to reveal the password. It seems too easy…
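The strings approach is easy to try on any file; here is a self-contained demo with a fabricated binary and a made-up embedded secret:

```shell
# Build a fake binary: non-printable bytes around a readable secret
printf '\x7f\x01\x02junk' > sample.bin
printf 'Sup3rS3cretP@ss' >> sample.bin
printf '\x00\x03\x04' >> sample.bin

# strings prints runs of printable characters (4 or more by default)
strings sample.bin
# → junkSup3rS3cretP@ss
```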


OK, I was trolled again.


Now that I'm logged in as user maleus, the first thing I did was examine all the files in the home directory; nothing interesting except for a file called .viminfo. I admit that I did not know what a .viminfo file does, but it contains a section named Input Line History that holds the user's last command lines, and one of them seems to contain a password. Since this is in the home directory of the maleus user, probably this is the password of maleus.


I tried SSH again as user maleus with this password and indeed it is the right one. Now I wondered why the author gives me the password for the current user: I already have a valid SSH key to log in as maleus, so having the password seems not so useful… but no! Having the user password allows me to use the sudo command.

After lots of not-so-useful attempts I was puzzled: it seems that the user maleus is in the sudoers list, but he is heavily restricted in what he can do. I tried lots of stuff, but nothing seemed to work. Searching the internet I discovered that I can issue sudo -l to find out which commands the current user can run with sudo.



As maleus I can run the dont_even_bother program as root. That file indeed does nothing; it is only trolling me, but I can replace it with something more useful. Creating an executable that opens a shell is basic hacking 101, the very first thing you learn when exploiting buffer overflows: a single line of C and you can launch a bash shell.


You do not need to be a C expert; you can find the above code everywhere on the internet. It is just a simple C file with a single instruction, system("/bin/bash"), that opens a shell. Since user maleus is allowed to run that specific executable as root with sudo, you only need to use the gcc compiler to compile your code under the right name, and the game is done. Just run sudo dont_even_bother and you are root.


This was a fun journey too, but I found this machine slightly simpler than the previous one, because in the previous one you needed to exploit a buffer overflow. It is indeed a very easy, simple, 101-level buffer overflow, but it is in my opinion more advanced knowledge than simply traversing the file system looking for information.

Thanks again to the author for spending time creating these machines.

Gian Maria.