Configure a VSTS Linux agent with docker in minutes

It is really simple to create a build agent for VSTS that runs on Linux and is capable of building and packaging your .NET Core project. I explained everything in a previous post, but I want to show that, with Docker, the whole process becomes really simple.

Everyone knows that setting up a build machine often takes time. VSTS makes it super simple to install the agent: just download a zip, call a script to configure the agent, and you are done. But this is only one side of the story. Once the agent is up, if you fire a build it will fail if you have not installed all the tools needed to compile your project (.NET Framework), and often you need to install the whole Visual Studio environment because of specific dependencies. I also have code that needs MongoDB and SQL Server to run tests against those two databases, which usually requires more manual work to set everything up.

In this situation Docker is a lifesaver, because it allowed me to set up a build agent on Linux in less than one minute.

Here are the steps. First of all, the unit tests use environment variables to grab the connection strings to MongoDB, MSSQL, and every external service they need. This is a key part, because each build agent can set those environment variables to point to the right servers. You may think that 99% of the time the connection string is something like mongodb://localhost:27017/, because the build agent usually has MongoDB installed locally to speed up the tests, but you cannot be sure, so it is better to leave each agent the ability to change those variables.
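As a sketch (the variable names match the ones used later in this post; the localhost defaults are my assumption), an agent setup script could set these variables with a fallback:

```shell
# Hypothetical agent setup: honor an agent-specific value when present,
# otherwise fall back to a local instance.
TEST_MONGODB="${TEST_MONGODB:-mongodb://localhost:27017/}"
TEST_MSSQL="${TEST_MSSQL:-Server=localhost;user id=sa;password=my_password}"
export TEST_MONGODB TEST_MSSQL
echo "$TEST_MONGODB"
```

The `${VAR:-default}` form is what gives each agent the ability to override the value without touching the tests.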

With this prerequisite in place, I installed a plain Ubuntu machine and then installed Docker. Once Docker was up and running I just fired up three containers. The first one is the MongoDB database:

sudo docker run -d -p 27017:27017 --restart unless-stopped --name mongommapv1 mongo

Then, thanks to Microsoft, I can run SQL Server on Linux in a container. Here is the second container, which runs MSSQL:

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=my_password' -p 1433:1433 --name msssql --restart=unless-stopped -d microsoft/mssql-server-linux

This starts a container with Microsoft SQL Server listening on the standard port 1433, with user sa and password my_password. Finally I start the Docker agent for VSTS:

sudo docker run \
  -e VSTS_ACCOUNT=prxm \
  -e VSTS_TOKEN=xxx \
  -e TEST_MONGODB=mongodb:// \
  -e TEST_MSSQL='Server=;user id=sa;password=my_password' \
  -e VSTS_AGENT='schismatrix' \
  -e VSTS_POOL=linux \
  --restart unless-stopped \
  --name vsts-agent-prxm \
  -it microsoft/vsts-agent

Thanks to the -e option I can specify any environment variable I want; this allows me to set the TEST_MSSQL and TEST_MONGODB variables for the third container, the VSTS agent. MongoDB and MSSQL are reachable through a special interface called docker0, a virtual network interface shared by the Docker containers and the host.


Figure 1: configuration of docker0 interface on the host machine

Since I’ve configured the containers to bridge the Mongo and SQL ports onto the same ports of the host, I can access MongoDB and MSSQL directly through the IP address of the host's docker0 interface. You can use docker inspect to find the exact IP of a container on this subnet, but you can simply use the docker0 IP of the host.
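A quick sketch of how a script could read that address (this assumes the default bridge network; 172.17.0.1 is only Docker's usual default, used here as a fallback when the interface is absent):

```shell
# Read the IPv4 address of the docker0 bridge; fall back to the common
# default when the interface does not exist on this machine.
DOCKER0_IP=$(ip -4 addr show docker0 2>/dev/null | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
DOCKER0_IP="${DOCKER0_IP:-172.17.0.1}"
echo "mongodb://${DOCKER0_IP}:27017/"
```

The same address can be used to build the TEST_MSSQL connection string.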


Figure 2: Connecting to mongodb instance

With just three commands my agent is up and running, capable of executing builds that require external database engines to verify the code.

This is the perfect technique to spin up a new build server in minutes (except for the time my network needs to download the Docker images 🙂 ), with a few commands and on a machine that has no UI (clearly you want a minimal Linux installation with only what you need).

Gian Maria.

How to securely expose my test TFS on the internet

I’m not a security expert, but I have basic knowledge of the subject, so when it was time to expose my test TFS to the outside world I took some precautions. First of all, this is a test TFS instance running in my test network; it is not a production instance and I only need to access it occasionally when I’m outside my network.

Instead of mapping port 8080 on my firewall, I deployed a Linux machine, enabled SSH, and added Google two-factor authentication; then I exposed port 22 on another external port. Thanks to this, the only port exposed on my router is one that remaps to port 22 on my Linux machine.

Now, when I’m on an external network, I use PuTTY to connect via SSH to that machine, and I set up tunneling as in Figure 1.


Figure 1: Tunneling to access my TFS machine

Tunneling allows me to remap port 8080 of that machine (my local TFS) onto my local port 8080. Now, from a machine outside my network, I can log in to that Linux machine.
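For reference, the same tunnel can be opened with a plain OpenSSH client instead of PuTTY; the hostnames and the external port here are placeholders for my real ones:

```shell
# Forward local port 8080 to port 8080 on the TFS machine,
# going through the Raspberry Pi exposed on the router.
ssh -p <external-port> -L 8080:<tfs-machine>:8080 pi@<router-address>
```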


Figure 2: Login screen with verification code.

This is a Raspberry Pi running Linux; I simply use pi as the username, then enter the verification code from my cellphone (Google Authenticator app) and finally my account password.

Once I’m connected to the Raspberry Pi I can simply browse http://localhost:8080 and everything is redirected through a secure SSH tunnel to the TFS machine. Et voilà: I can access any machine and any port in my network just using SSH tunneling.


Figure 3: My local TFS instance now accessible from external machine

This is surely not a tutorial on how to expose a production TFS instance (please use HTTPS); it is rather a simple tutorial on how you can access every machine in your local lab without directly exposing ports on your home router. If you are a security expert you will probably find flaws in this approach, but it is surely better than mapping ports directly on the router.

Gian Maria.

Using certificate for SSH in Azure Linux VM

If you like using certificates to connect via SSH to your Linux machines, you will probably want to use that technique for all of your VMs, even those hosted on Azure.

This operation is really simple, because the Azure Portal allows you to specify the public key during VM creation, and everything else is managed by the VM creation scripts. In the same blade where you specify username and password, you can opt to use a certificate instead of a password. Just open the .pub file you created previously (with ssh-keygen) and paste its full content into the appropriate textbox.


Figure 1: Specifying ssh public key during VM Creation

As you can see from Figure 1, the portal validates the key with a little green check at the right of the textbox, informing you that the public key is valid. Once the VM is created you can use PuTTY or your favourite SSH client to access the machine with the certificate.

Thanks to Azure Portal you can choose to use an existing certificate to access your machine

If you have already created your VM with a standard username and password, you can easily connect to that machine and add the public key to the .ssh/authorized_keys file as shown in a previous blog post, or you can use the Azure CLI to configure SSH on an existing VM. First of all you need to convert the file generated by ssh-keygen into a format the Azure CLI understands.

Unfortunately you cannot use the .pub file as you can during machine creation; the command line interface tool requires a file with a .pem extension. You can convert your file easily with the openssl utility on a Linux VM.

openssl req -x509 -new -days 365 -key id_rsa_nopwd -out id_rsa_nopwd.pem

Thanks to this command, my RSA private key file generated with ssh-keygen is wrapped in a self-signed certificate and saved as a .pem file. Now you can use it to configure your VM from the Azure CLI.
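The conversion can be rehearsed end to end with a throwaway key (a sketch; the file names are mine, and the -m PEM flag is there because recent versions of ssh-keygen no longer emit PEM private keys by default):

```shell
# Generate an unprotected RSA key in PEM format, then wrap it in a
# self-signed certificate, which is the format the CLI tool accepts.
rm -f /tmp/id_rsa_nopwd /tmp/id_rsa_nopwd.pem
ssh-keygen -t rsa -b 2048 -m PEM -N '' -f /tmp/id_rsa_nopwd -q
openssl req -x509 -new -days 365 -key /tmp/id_rsa_nopwd \
    -subj '/CN=demo' -out /tmp/id_rsa_nopwd.pem
# Inspect the result to verify the certificate was written correctly.
openssl x509 -in /tmp/id_rsa_nopwd.pem -noout -subject
```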

azure vm reset-access 
	--reset-ssh --ssh-key-file z:\Secure\Rsa\id_rsa_nopwd.pem 
	--user-name gianmaria 
	--password xxxxxx

You will be prompted for the Resource Group and VM name (you can also specify these two parameters on the command line), then the CLI will update your virtual machine for you.


Figure 2: Result of the reset-access command

Now you can access your VM using the certificate, and if you look at your .ssh/authorized_keys file you can verify that the public key was correctly added by the Azure CLI utility.


Figure 3: I can now connect to my VM using certificate

Gian Maria.

Where is my DNS name for an Azure VM with the new Resource Manager?

Azure is changing its management mode for resources, as you can read in this article, and this is the reason why, in the new portal, you see two different entries for some resources, e.g. Virtual Machines.


Figure 1: Classic and new resource management in action

Since the new model gives more control over resources, I created a Linux machine with the new model to do some testing. After the machine was created I opened its blade (machines created with the new model are visible only in the new portal) and I noticed that there is no DNS name setting.


Figure 2: Summary of my new virtual machine, computer name is not a valid DNS address

Compare Figure 2 with Figure 3, which shows the summary of a VM created with the old resource management. As you can see, there the computer name is a valid DNS address.


Figure 3: Summary of VM created with old resource management, it has a valid DNS name.

Since these are test VMs that are off most of the time, the IP changes at each reboot, and I really want a stable, friendly name to store my connection in PuTTY/mRemoteNG.

From the home page of the portal you should see the resource group where the machine was created. If you open it, you can see all the resources that belong to that group. In the list you should see your virtual machine as well as the IP address resource (point 2 in Figure 4), which can be configured to have a DNS name label. The label is optional, so it is not set up automatically during machine creation, but you can specify it later.


Figure 4: Managing ip addresses of network attached to the Virtual Machine

Now I set the DNS name label of my machine to have a friendly name for my connections.


Using Certificate to connect via SSH to your Linux Machine

The world of computing is changing and it is no longer possible to live in an isolated silo where you use only one operating system or technology. This is the reason why, even though my last 20 years of development were spent almost exclusively on Windows technologies, in the last two years I have had many occasions to use Linux machines.

I used Linux at the very beginning, with the Slackware distribution, and I was the typical guy who works from the command line. At that time I used Window Maker as UI, but all the administration was done via console. This is the reason why, when I need a Linux VM (Docker, ElasticSearch, Solr, Mongo, etc.), I always install it without a GUI.

I use a local Hyper-V host or Azure, and usually connect to the machines via SSH with a standard username and password, but this is not optimal because I have lots of VMs. With many VMs you can either use the same password for all machines or use a password manager to keep track of all the passwords, but this is not very secure (even if all the environments are test ones).

A better solution is using a certificate to connect to your Linux machines via SSH.

If you constantly use Linux VMs I strongly suggest using SSH with certificates, because it is far more secure and manageable than username and password. I’m not a security expert, but I want to simply explain how I enabled certificate-based authentication in SSH to get rid of all my passwords.

First of all you should generate a certificate (a key pair) with ssh-keygen.

gianmaria@ubuntu:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/gianmaria/.ssh/id_rsa):
Created directory '/home/gianmaria/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/gianmaria/.ssh/id_rsa.
Your public key has been saved in /home/gianmaria/.ssh/
The key fingerprint is:
22:05:82:3e:55:14:79:06:56:70:83:63:45:88:05:67 gianmaria@ubuntu
The key's randomart image is:
+--[ RSA 2048]----+
|  =OE..+. .      |
| .*+ o+ o. .     |
| . . + o    .    |
|    . +    .     |
|     o .S o      |
|      .  . .     |
|                 |
|                 |
|                 |
+-----------------+

The only thing this command asks you for is a password to encrypt the certificate; if you prefer more security please choose a strong one.

Now you have two files in your .ssh directory: one is the private key and the other is the public key. Asymmetric encryption lets the server verify, with the public key, a signature made with the private key, and SSH keeps in each user account a file, .ssh/authorized_keys, listing all the public keys allowed to log in to that account. Now that you have a valid public key, you can add it to the list of keys authorized to connect to the current account with a simple command.

gianmaria@ubuntu:~/.ssh$ cat id_rsa.pub >> authorized_keys

Each account has a list of public keys, corresponding to certificates, that are allowed to log in to that account

Now you can use the private key to connect to this machine without username and password. The client needs access to the private key, so the server can validate it against the list of authorized public keys (.ssh/authorized_keys) to determine whether it is entitled to log in.
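The server-side setup can be rehearsed in a scratch directory (a sketch; the paths are throwaway, and the chmod calls matter because sshd ignores an authorized_keys file whose permissions are too open):

```shell
# Generate a demo key pair, append the public half to authorized_keys,
# and set the permissions sshd requires.
DEMO=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$DEMO/id_rsa" -q
mkdir -p "$DEMO/.ssh" && chmod 700 "$DEMO/.ssh"
cat "$DEMO/id_rsa.pub" >> "$DEMO/.ssh/authorized_keys"
chmod 600 "$DEMO/.ssh/authorized_keys"
ls -l "$DEMO/.ssh/authorized_keys"
```

On a real server the directory is of course ~/.ssh of the target account, and from a Linux client the login then becomes `ssh -i id_rsa user@server`.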

The private key is usually password protected (remember that a passphrase is requested when the certificate is generated) and you should store the private key file in a really secure place. I transferred both files to my Windows machine with WinSCP, then stored my private and public keys in KeePass together with my passwords. Then I created an encrypted folder on my system and copied the certificate files into it.

Now, if you use PuTTY, you should use puttygen.exe to convert the id_rsa file (private key) into PuTTY format; then you can use it to log in with PuTTY. This excellent tutorial will guide you through all the steps.

If you chose a password when generating the certificate, it will be asked each time you connect to the server. If you can accept less security for test machines, you can generate the certificate without specifying a password. If the private key has no password, you can simply double-click a stored session in PuTTY and you are logged in.

Remember that, if the private key is not password protected, anyone who gets hold of the file is able to log in to the machine without any problem.

I strongly suggest you never, ever generate a certificate without a strong password for any production server or any machine holding data you care about.

Finally, thanks to mRemoteNG's ability to read PuTTY saved sessions, I can now simply double-click the connection and I'm logged in to my machines.


Figure 1: mRemoteNG is able to read PuTTY saved sessions; now you can double-click the link and you are logged in.

Gian Maria.