Configure a VSTS Linux agent with docker in minutes

Creating a build agent for VSTS that runs on Linux and is capable of building and packaging your .NET Core project is really simple. I explained everything in a previous post, but I want to show how Docker makes the whole process even simpler.

Everyone knows that setting up a build machine often takes time. VSTS makes it super simple to install the agent: just download a zip, call a script to configure the agent, and you are done. But this is only one side of the story. Once the agent is up, if you fire a build it will fail if you did not install all the tools needed to compile your project (.NET Framework), and often you need to install the whole Visual Studio environment because you have specific dependencies. I also have code that needs MongoDB and SQL Server to run tests against those two databases, and this usually requires more manual work to set everything up.

In this situation Docker is your lifesaver: it allowed me to set up a Linux build agent in less than a minute.

Here are the steps. First of all, the unit tests use environment variables to grab the connection strings to MongoDB, MSSQL and every other external service they need. This is a key part, because each build agent can set those environment variables to point to the right servers. You may think that 99% of the time the connection string is something like mongodb://localhost:27017/, because the build agent usually has MongoDB installed locally to speed up the tests, but you cannot be sure, so it is better to leave each agent the ability to change those variables.
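As a sketch of this convention (TEST_MONGODB and TEST_MSSQL are simply the names my tests look for; adapt them to your own code), each agent machine can export its own values, falling back to localhost when nothing else is configured:

# Per-agent setup: point the tests to local services unless overridden
export TEST_MONGODB="${TEST_MONGODB:-mongodb://localhost:27017/}"
export TEST_MSSQL="${TEST_MSSQL:-Server=localhost;user id=sa;password=my_password}"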

With this prerequisite in place, I installed a plain Ubuntu machine and then installed Docker. Once Docker is up and running, I just fire up three Docker containers; the first one is the MongoDB database:

sudo docker run -d -p 27017:27017 --restart unless-stopped --name mongommapv1 mongo
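Before wiring it to the agent, a quick sanity check does not hurt; the standard mongo image ships with the mongo shell, so you can ping the database from inside the container:

# Verify the container is up and the database answers a ping
sudo docker ps --filter name=mongommapv1
sudo docker exec mongommapv1 mongo --eval 'db.runCommand({ ping: 1 })'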

Then, thanks to Microsoft, I can run SQL Server on Linux in a container; here is the second Docker command, which runs MSSQL:

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=my_password' -p 1433:1433 --name msssql --restart=unless-stopped -d microsoft/mssql-server-linux

This will start a container running Microsoft SQL Server, listening on the standard port 1433, with the sa user and password my_password.
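To verify that SQL Server is really accepting connections, you can use the sqlcmd tool that ships inside the mssql-server-linux image (the path below is an assumption based on the standard image layout, so double check it):

# Run a trivial query from inside the container to check the sa login
sudo docker exec -it msssql /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'my_password' -Q 'SELECT @@VERSION'

Finally, I start the Docker agent for VSTS: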

sudo docker run \
  -e VSTS_ACCOUNT=prxm \
  -e VSTS_TOKEN=xxx \
  -e TEST_MONGODB=mongodb://172.17.0.1 \
  -e TEST_MSSQL='Server=172.17.0.1;user id=sa;password=my_password' \
  -e VSTS_AGENT='schismatrix' \
  -e VSTS_POOL=linux \
  --restart unless-stopped \
  --name vsts-agent-prxm \
  -it microsoft/vsts-agent

Thanks to the -e option I can specify any environment variable I want; this allows me to set the TEST_MSSQL and TEST_MONGODB variables for the third Docker container, the VSTS agent. The IPs of MongoDB and MSSQL are reached through a special interface called docker0, a virtual network interface shared by the Docker containers and the host.


Figure 1: configuration of docker0 interface on the host machine

Since I've configured the containers to bridge the Mongo and SQL ports to the same ports on the host, I can access MongoDB and MSSQL directly through the docker0 IP address of the host. You can use docker inspect to find the exact IP of each container on this subnet, but you can simply use the IP of the host.
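If you are unsure about the addresses involved, these two commands show the docker0 address of the host and the bridge address assigned to a single container:

# IP of the docker0 bridge on the host (usually 172.17.0.1)
ip -4 addr show docker0

# IP assigned to a specific container on the default bridge network
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' mongommapv1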


Figure 2: Connecting to mongodb instance

With just three commands my agent is up and running and is capable of executing builds that require external database engines to verify the code.

This is the perfect technique to spin up a new build server in minutes (except the time needed for my network to download the Docker images 🙂 ), with a few lines of code, on a machine that has no UI (clearly you want a minimal Linux installation, so you have only what you need).

Gian Maria.

Pause builds and clear a long build queue

In the VSTS / TFS build system, you can switch a build definition between three states: Enabled, Paused and Disabled. The Paused state is really special, because all the build triggers are still active and builds are queued, but none of the queued builds starts.


Figure 1: Paused build

The Paused state should be used with great care, because if you forget a build in this state you can end up with lots of queued builds, as you can see in Figure 2:


Figure 2: Really high number of builds queued, because the build definition is paused.

What happened in Figure 2 is that some user probably set the build to Paused believing that no builds would be queued; after some weeks he wanted to re-enable it, but there were 172 builds in the queue.

Now, if you are in a situation like this, you probably need to remove all queued builds before re-enabling the build definition. If you simply set the build back to Enabled, you risk completely saturating your build queue. To solve this problem, just go to the Queued tab of the build page.


Figure 3: The Queued tab of the build page

From this page you can filter to show only the queued builds for the definition that was paused, then select all of them and delete all the scheduled runs at once. Thanks to the filtering abilities of the Queued tab, you can quickly identify queued builds and do massive operations on them.
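If you prefer to script the cleanup instead of clicking in the web UI, the VSTS REST API can list queued builds and delete them one by one. The following is only a sketch: it assumes curl and jq, a PAT with build permissions, and a hypothetical project name and definition id 42; double check the api-version against your TFS / VSTS release:

# List builds of definition 42 that have not started yet, then delete each one
PAT=xxx
BASE="https://prxm.visualstudio.com/DefaultCollection/MyProject/_apis/build"
for ID in $(curl -s -u "user:$PAT" \
    "$BASE/builds?definitions=42&statusFilter=notStarted&api-version=2.0" \
    | jq -r '.value[].id'); do
  curl -s -u "user:$PAT" -X DELETE "$BASE/builds/$ID?api-version=2.0"
done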

Now that we have deleted all 172 queued builds, we can re-enable the build without the risk of saturating the build queue.

Gian Maria.

Dotnetcore, CI, Linux and VSTS

If you have a .NET Core project, it is a good idea to set up continuous integration on a Linux machine. This guarantees that the solution actually compiles correctly and all the tests run perfectly in a Linux environment too. If you are 100% sure that a .NET Core project that runs fine under Windows will also run fine under Linux, you are in for some interesting surprises. The first and most trivial difference is that the Linux filesystem is case sensitive.
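A trivial demo of the difference (this is the same class of error you get when a csproj references a file with the wrong casing):

# On Linux the second command fails: foo.txt and Foo.txt are different files
echo hello > Foo.txt
cat foo.txt   # cat: foo.txt: No such file or directory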

If you use .NET Core, it is always a good idea to immediately set up a build against a Linux environment to ensure portability.

I started by creating a dedicated pool for Linux machines. Having a dedicated pool is not strictly necessary, because a build can simply demand the Linux capability, but I'd like to have all the Linux build agents in a single place for easier management.


Figure 1: Create a pool dedicated to build agents running Linux operating system

Pressing the “Download agent” button, you are presented with a nice UI that explains in a really easy way how to deploy the agent on your Linux machine.


Figure 2: You can easily download the agent from VSTS / TFS web interface

Instructions are detailed, and it is really easy to start your agent this way: just run a configuration shell script, then run the agent with another run.sh shell script.
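From memory, the scripts in the downloaded archive are config.sh and run.sh, so the manual setup looks roughly like this (the exact tarball name depends on the agent version):

# Unpack the agent, answer the configuration prompts, then start it
mkdir myagent && cd myagent
tar zxvf ../vsts-agent-linux-x64.tar.gz
./config.sh   # asks for server URL, PAT and agent pool
./run.sh      # the agent starts and listens for jobs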

There is also another interesting approach: you can give a shot to the official Docker image that you can find here: https://github.com/Microsoft/vsts-agent-docker. The only thing I need to do is run the Docker image with this command:

sudo docker run \
  -d \
  -e VSTS_ACCOUNT=prxm \
  -e VSTS_TOKEN=my_PAT_TOKEN \
  -e VSTS_AGENT='schismatrix' \
  -e VSTS_POOL='Linux' \
  -it microsoft/vsts-agent

Please be patient on the first run, because the Docker image is pretty big and the download can take a while. Once the image is running, verify with sudo docker ps that the container is really running fine, and check on the Agent Pool page that the agent is connected. The drawback of this approach is that currently only Ubuntu is supported with Docker, but the situation will surely change in the future.

Docker is surely the simplest way to run a VSTS / TFS Linux build agent.

Another thing to pay attention to is running the image with the -d option, because whenever you create a new instance of the VSTS agent from the base Docker image, the container downloads the latest agent; this implies that you need to wait a decent amount of time before the agent is up and running, especially if you, like me, are on a standard ADSL connection with a maximum download speed of 5 Mbps.


Figure 3: Without the -d option, the image will run interactively and you need to wait for the agent to be downloaded

As you can see from the image, a new Docker instance starts from the base image, contacts the VSTS server and downloads and installs the latest version of the agent.


Figure 4: After the agent is downloaded, the image automatically configures and runs the agent and you are up and running.

Only when the output of the Docker image states "Listening for Jobs" is the agent really online and usable.


Figure 5: Agent is alive and kicking
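If you run the container detached with -d you do not see that output directly, but you can still follow it with docker logs:

# Follow the agent output until "Listening for Jobs" appears
sudo docker logs -f $(sudo docker ps -q --filter ancestor=microsoft/vsts-agent)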

Another interesting behavior: when you press CTRL+C to stop the interactive container instance, the Docker image removes the agent from the server, avoiding the risk of leaving orphan registrations in your VSTS server.


Figure 6: When you stop the Docker image, the agent is de-registered to avoid orphan agent registrations.

Please remember that whenever you stop the container with CTRL+C the container stops, and when you run it again it will download the VSTS agent again.

This happens because, whenever the container stops and runs again, it needs to re-download everything that is not included in the state of the container itself. This is usually not a big problem, and I must admit that this little nuance is overcome by the tremendous simplicity you gain: just run a container and you have your agent up and running, with the latest version of .NET Core (2.0) and other goodness.

The only real drawback of this approach is that you have little control over what is available in the image. As an example, if you need some special software installed on the build machine, you probably need to fork the image and configure it for your needs.
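A minimal sketch of such a fork (the extra package here is just a hypothetical example of “special software”; I have not tested this exact Dockerfile):

# Dockerfile: derive a custom agent image from the official one
FROM microsoft/vsts-agent
RUN apt-get update \
    && apt-get install -y --no-install-recommends mongodb-clients \
    && rm -rf /var/lib/apt/lists/*

Then you build it with sudo docker build -t my-vsts-agent . and run it exactly like the official image.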

Once everything is up and running (Docker or manual run.sh), just fire a build and watch it being executed on your Linux machine.


Figure 7: Build with tests executed on a Linux machine.

Gian Maria

Check Angular AoT with a TFS Build

Developing with Angular is real fun, but during development you usually serve the application without any optimization, mainly because you want to speed up the compilation and serving of your Angular application.

When it is time to release the software, you usually build with the --prod switch and usually also with the --aot switch (it seems to me that it is on by default with --prod in the latest versions of the ng compiler). The aot switch enables Ahead of Time compilation, which speeds up the deployed application and also finds more errors than a normal compilation.

I do not want to list examples where a simple ng build compiles just fine but the production build with aot fails; the important concept is that, when you fire your production build to generate install packages, it is really annoying to see the Angular build fail because of some error. Since building with aot is really expensive, I decided not to trigger this build for each commit, but only in special situations. First of all, here is the whole build.


Figure 1: Build to check aot compilation
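Under the hood the build boils down to a couple of command line steps; this is roughly the equivalent shell (a sketch, the exact task configuration is the one shown in Figure 1):

# Restore npm packages, then compile with production optimizations and AoT checks
npm install
node_modules/.bin/ng build --prod --aot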

As you can see, with TFS Build it is really a breeze to create a build that simply triggers the ng compiler to verify that the project compiles correctly with the aot switch on. Thanks to the trigger panel of my build, I decided to build automatically only the master and release branches.


Figure 2: Trigger configuration for my AOT build

This is another great advantage of Git over a traditional source control system: I can configure a single build for every branch of my repository. In this specific situation I require that the build be triggered for the release and master branches.

This is usually a good approach, but I wanted to push a little further, so I configured that build as optional for pull requests to the develop branch. With this technique, whenever a pull request is made on the develop branch, that build is triggered, and people can fix errors before they slip into the main develop branch.

Thanks to the VSTS build system, it is just a matter of minutes to set up specific builds that check specific parts of your code, like the Angular production compilation and optimization.

Gian Maria.

How to securely expose my test TFS on the internet

I'm not a security expert, but I have a basic knowledge of the subject, so when it was time to expose my test TFS to the outside world I took some precautions. First of all, this is a test TFS instance running in my test network; it is not a production instance and I only need to access it occasionally when I'm outside my network.

Instead of mapping port 8080 on my firewall, I deployed a Linux machine, enabled SSH and added Google two-factor authentication, then exposed port 22 on another external port. Thanks to this, the only port exposed on my router is a port that remaps to port 22 of my Linux instance.
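For reference, on Raspbian / Debian the two-factor part is roughly this (a sketch from memory, so check the libpam-google-authenticator documentation before trusting it on a real box):

# Install the PAM module and generate a per-user secret (scan the QR code
# with the Google Authenticator app)
sudo apt-get install libpam-google-authenticator
google-authenticator

# Then enable it for SSH: add "auth required pam_google_authenticator.so"
# to /etc/pam.d/sshd, set "ChallengeResponseAuthentication yes" in
# /etc/ssh/sshd_config and restart the daemon
sudo service ssh restart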

Now, when I'm on an external network, I use PuTTY to connect via SSH to that machine, and I set up tunneling as shown in Figure 1.


Figure 1: Tunneling to access my TFS machine

Tunneling allows me to remap port 8080 of the 10.0.0.116 machine (my local TFS) to port 8080 of my local machine. Now, from a machine outside my network, I can log in to that Linux machine.


Figure 2: Login screen with verification code.

This is a Raspberry Pi running Linux; I simply use pi as the username, then the verification code from my phone (Google Authenticator app) and finally the password of my account.

Once I'm connected to the Raspberry machine, I can simply browse http://localhost:8080 and everything is redirected through a secure SSH tunnel to the 10.0.0.116 machine. Et voilà: I can access any machine and any port in my network just using SSH tunneling.
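If you prefer plain ssh to PuTTY (or you are not on Windows), the whole tunnel is a single command; here 2222 stands for the external port my router remaps to port 22, so adjust it to your setup:

# Forward local port 8080 to port 8080 of 10.0.0.116 through the SSH gateway
ssh -p 2222 -L 8080:10.0.0.116:8080 pi@my.public.address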


Figure 3: My local TFS instance now accessible from external machine

This is surely not a tutorial on how to expose a production TFS instance (please use HTTPS for that); it is instead a simple tutorial on how you can access every machine in your local lab without directly exposing ports on your home router. If you are a security expert you will probably find flaws in this approach, but it is surely better than mapping ports directly on the router.

Gian Maria.