Using a VMware machine when you have Hyper-V

There are lots of VMs containing demos, labs, etc. around the internet, and Hyper-V is surely not their primary virtualization target. This is because it has been present on desktop OSes only since Windows 8, it is not free (it ships only with the Professional editions) and it is bound to Windows. If you have to create a VM to share on the internet, 99% of the time you want to target VMware or VirtualBox and a Linux guest system (no license needed). Since VirtualBox can run VMware machines with little trouble, VMware is the de facto standard in this area.

Virtual machines with demos, labs, etc. that you find on the internet are 99% of the time targeted at the VMware platform.

In the past I’ve struggled a lot with tools that convert VMware disk formats to the Hyper-V format, but sometimes the conversion does not work, because the hardware virtualized by the two systems is really different.

If you really want to be productive, the only solution I’ve found is installing an ESXi server on an old machine, an approach that has given me lots of satisfaction. First of all, you can use VMware’s standalone conversion tool to convert a VMware VM to the standard OVF format in a few minutes, then upload the image to your ESXi server and you are ready to go.


Figure 1: A simple command line instruction converts the VM into OVF format
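The invocation is basically a one-liner with VMware’s ovftool (a sketch; the file names and paths here are examples, adjust them to your own VM):

```shell
# Convert a VMware Workstation machine (.vmx) into the standard OVF format.
# Paths are examples; point ovftool at your own .vmx file.
ovftool /path/to/MyLab/MyLab.vmx /path/to/export/MyLab.ovf
```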


Figure 2: From the ESXi interface you can choose to create a new VM from an OVF file

Once you choose the OVF file and the disk file, you just need to specify some basic characteristics for the VM, then you can simply let the browser do the rest: your machine will be created on your ESXi node.


Figure 3: Your VM will be created directly from your browser.

The second advantage of ESXi is that it is a really mature and powerful virtualization system, available for free. The only drawback is that it needs a serious network card; it will not work with a crappy card integrated into a consumer motherboard. For my ESXi test instance I used my old i7-2600K with a standard Asus P8P67 motherboard (overclocked), and then I spent a few bucks (approx. 50€) to buy a used 4x Gigabit network card. This gives me four independent NICs with a decent network chip, each one running at 1 Gbit. Used cards are really cheap, especially because there are no drivers for the latest operating systems, so they are thrown away on eBay for a few bucks. When you use a virtual machine to test something that involves networks, you will thank ESXi and a decent multi-NIC card, because you can create real network topologies, like having three machines each using a different NIC, potentially connected to different routers / switches to test a real production scenario.

ESXi NIC virtualization is FAR more powerful than VirtualBox or even VMware Workstation when installed with a really powerful NIC. Combined with a multi-NIC card, you have the ability to simulate real network topologies.

If you are using Linux machines, the VMware environment has another great advantage over Hyper-V: it supports all resolutions. You are not limited to Full HD after manually editing the grub configuration; you can change your resolution from the Linux control panel, or directly enable live resizing with the Remote Console available in ESXi.

If you really want to create a test lab, especially if you want to do security testing, having one or more ESXi hosts is something that pays off a lot in the long run.

Gian Maria

ESXi, Hyper-V and Linux

I mainly use Hyper-V to virtualize my test environments and I’m really happy with it; the only problem is virtualizing Linux desktop environments, especially if you have monitors with a higher resolution than Full HD (I have not found a way to make Hyper-V run at a resolution greater than Full HD).

To overcome this limitation, I converted my old workstation into a virtualization host running VMware ESXi and I’m really satisfied. Here are a couple of tricks that I’ve learned (I’m completely new to the latest version of ESXi).

ESXi is free and it is a really powerful virtualization system. If you have hardware to spare, I strongly suggest you set up an ESXi instance to be able to run both Hyper-V and VMware based virtual machines.

First of all, you need to buy a new network adapter: ESXi is really picky about your network card and it refuses to install if you only have a crappy integrated Ethernet card. I bought an old used Intel 4x1Gb card on eBay. If you look around you can find old boards that are perfect for ESXi at a really cheap price. Once you have a good Ethernet adapter you are ready to go. Here are my physical NICs:


Figure 1: NIC adapters on my system.

I strongly suggest you read the Compatibility Guide; in my experience, Intel cards are the most compatible ones, even if they are really old. This card of mine does not work in Windows 2012 or later editions (it is really, really old), but it works like a charm in ESXi 6.5; it has 4 physical NICs and it cost me around 40€.
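If you prefer the command line to the web interface, you can list the physical NICs that ESXi recognizes from the ESXi shell (a sketch; it requires SSH or the local shell to be enabled on the host):

```shell
# List the physical network adapters that ESXi detected,
# with driver, link status and speed for each one.
esxcli network nic list
```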

Another thing I’ve learned is not to use the web interface to access Linux machines. Since I’m in Italy I have an Italian keyboard layout, and I had lots of problems with key mapping for Linux machines when I accessed them through the standard web interface. The problem happens because, when you start your VM, it is super natural to click the preview to open a web interface to interact with the machine.


Figure 2: Click on the preview, and you will access the machine with a web interface

If you instead click on the Console menu, you can download a standalone remote console tool (available for all operating systems) that allows you to connect to your virtual machines and avoid keyboard problems.

The latest version of ESXi can be entirely managed through the web interface, but to interact with virtual machines, the best solution is the standalone VMRC tool.


Figure 3: Download VMRC standalone software to connect to your machines

Once you have downloaded and installed the VMRC tool, you can simply use the “Launch Remote Console” menu option and you will be connected to your machine with a really nice standalone console that will solve all of your keyboard problems.

Gian Maria.

Dotnetcore, CI, Linux and VSTS

If you have a dotnetcore project, it is a good idea to set up continuous integration on a Linux machine. This will guarantee that the solution actually compiles correctly and all the tests run perfectly in a Linux environment too. If you are 100% sure that a dotnetcore project that runs fine under Windows will also run fine under Linux, you will have some interesting surprises. The first and most trivial difference is that the Linux filesystem is case sensitive.
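The case sensitivity issue is easy to demonstrate (a minimal sketch; the file name is just an example of a project file referenced with the wrong casing):

```shell
# On Linux a file referenced with the wrong casing is simply not found,
# while on a Windows (NTFS) checkout the same reference would resolve.
mkdir -p /tmp/case-demo
cd /tmp/case-demo
touch MyProject.csproj
if [ -f myproject.csproj ]; then echo "found"; else echo "not found"; fi
# On Linux this prints "not found".
```

A build that works on Windows because of a mismatched casing in a project or package reference fails on Linux with a "file not found" error, which is exactly the kind of portability problem a Linux build catches early.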

If you use dotnetcore, it is always a good idea to immediately set up a build against a Linux environment to ensure portability.

I start by creating a dedicated pool for Linux machines. Actually, having a dedicated pool is not necessary, because a build can simply require the Linux capability, but I’d like to have all the Linux build agents in a single place for easier management.


Figure 1: Create a pool dedicated to build agents running Linux operating system

Pressing the “Download agent” button you are prompted with a nice UI that explains in a really easy way how to deploy the agent on your Linux machine.


Figure 2: You can easily download the agent from VSTS / TFS web interface

Instructions are detailed, and it is really easy to start your agent this way: you just run a configuration shell script, then you run the agent with another shell script.
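On the Linux box the manual installation boils down to a few commands (a sketch; the download URL and agent version are examples taken from the pattern the UI shows, and the configuration step will interactively ask for your server URL, PAT and pool):

```shell
# Download and extract the VSTS Linux build agent
# (URL and version number are examples; copy the real ones from the UI).
mkdir myagent && cd myagent
wget https://vstsagentpackage.azureedge.net/agent/2.x.x/vsts-agent-linux-x64-2.x.x.tar.gz
tar zxf vsts-agent-linux-x64-2.x.x.tar.gz

# Configure the agent (asks for server URL, PAT, pool and agent name),
# then start it.
./config.sh
./run.sh
```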

There is also another interesting approach: you can give a shot to the official Docker image that you can find here. The only thing I need to do is run the Docker image with this command:

sudo docker run -d -e VSTS_ACCOUNT=prxm -e VSTS_TOKEN=my_PAT_TOKEN -e VSTS_AGENT='schismatrix' -e VSTS_POOL='Linux' -it microsoft/vsts-agent

Please be patient on the first run: the Docker image is pretty big, so you need to wait for the download to finish. Once the image is running, you should verify with sudo docker ps that it is really running fine, and check on the Agent Pool page that the agent is connected. The drawback of this approach is that currently only Ubuntu is supported with Docker, but the situation will surely change in the future.

Docker is surely the simplest way to run a VSTS / TFS Linux build agent.

Another thing to pay attention to is running the image with the -d option, because whenever you create a new instance of the VSTS agent from the base Docker image, it downloads the latest agent; this implies that you need to wait a decent amount of time before the agent is up and running, especially if you, like me, are on a standard ADSL connection with a max download speed of 5 Mbps.


Figure 3: Without the -d option, the image runs interactively and you need to wait for the agent to be downloaded

As you can see from the image, running a new Docker instance starts from the base image, contacts the VSTS server and downloads and installs the latest version of the agent.


Figure 4: After the agent is downloaded, the image automatically configures and runs the agent, and you are up and running.

Only when the output of the Docker image states “Listening for Jobs” is the agent online and usable.
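When the container runs detached, you can do the same check from the host (a sketch; the container id is the one reported by docker ps):

```shell
# Check that the agent container is up...
sudo docker ps

# ...then inspect its output and wait until the agent reports
# that it is listening for jobs (use the id from docker ps).
sudo docker logs <container-id> | grep "Listening for Jobs"
```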


Figure 5: Agent is alive and kicking

Another interesting behavior is that, when you press CTRL+C to stop an interactive container instance, the image removes the agent from the server, avoiding the risk of leaving orphan registrations in your VSTS server.


Figure 6: When you stop the Docker image, the agent is de-registered to avoid orphan agent registrations.

Please remember that, whenever you stop the container with CTRL+C, the container will stop, and when you restart it, it will download the VSTS agent again.

This happens because, whenever the container stops and runs again, it needs to re-download everything that is not included in the state of the container itself. This is usually not a big problem, and I have to admit that this little nuisance is overcome by the tremendous simplicity you gain: just run a container and you have your agent up and running, with the latest version of dotnetcore (2.0) and other goodness.

The only real drawback of this approach is that you have little control over what is available in the image. As an example, if you need some special software installed on the build machine, you probably need to fork the image and configure it for your needs.

Once everything is up and running (Docker or manual install), just fire a build and watch it being executed on your Linux machine.


Figure 7: Build with tests executed in Linux machine.

Gian Maria

Error in WSUS after Windows Update KB3148812

I have a test lab with a Windows Server 2012 R2 domain controller, and one of the features I like most is WSUS, which allows me to spin up and update new virtual machines without waiting for all the Windows Updates to be downloaded from the internet.

Yesterday I noticed that the WSUS service had suddenly stopped working: a couple of test VMs gave me errors trying to connect to the update service, and on the WSUS server the service was indeed stopped.

After a look in the Event Viewer I was able to track down the reason: the database was not operational. I fired up SQL Server Management Studio and connected to the internal database with the address


And I noticed that the WSUS database was in a Recovering state; the bad part is that I restored a previous backup, but the problem did not go away. After a quick search I found someone blaming the update KB3148812 as the cause of this problem. After uninstalling that update and rebooting the server, the issue went away.
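For reference, the Windows Internal Database used by WSUS can also be reached with sqlcmd (a sketch, assuming the default WID named pipe on Windows Server 2012 R2; verify the pipe name on your own server):

```shell
# Connect to the Windows Internal Database (WID) with Windows authentication.
# The pipe name below is the default for WID on Windows Server 2012 R2.
sqlcmd -E -S "np:\\.\pipe\MICROSOFT##WID\tsql\query"

# Inside sqlcmd you can then check the state of each database:
#   SELECT name, state_desc FROM sys.databases;
#   GO
```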

Gian Maria.

How I brought my Nexus7 Back to life

I own a Nexus 7 (2012 version) and I was pretty satisfied with it: it is small, and I used it primarily with Zite / Flipboard and to quickly read some news feeds. The device is nice, but it has crappy hardware and, after a couple of years, it stopped charging.

It turns out that there is a known problem with the USB port that prevents proper charging. The simplest solution is buying a docking station, but Asus decided to stop producing it, and now you can only find it on the internet at a really high price.

The cheapest solution is buying a flex charger cable and replacing it yourself. After the replacement the tablet started to charge correctly.

Then I noticed that, even after a full reset, it was really slower than in the past, so I decided to check whether it was a problem with the Lollipop upgrade. Google has a public page where you can find all the stock ROMs for the Nexus 7, so I downloaded the Android SDK, downloaded KitKat 4.4.4 and restored the device to KitKat in less than half an hour.
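The restore procedure boils down to a few commands with the adb and fastboot tools from the Android SDK (a sketch; the factory image file name is an example, and unlocking the bootloader ERASES all data on the device):

```shell
# Extract the factory image downloaded from Google's page
# (file name is an example; use the one for your device/build).
tar xzf nakasi-ktu84p-factory.tgz
cd nakasi-ktu84p

# Reboot the tablet into the bootloader and unlock it (this wipes the device).
adb reboot bootloader
fastboot oem unlock

# Flash bootloader + system; the script reboots the device when done.
./flash-all.sh
```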

Now my tablet is back to life; it has no astonishing speed, but it is perfectly usable, as it was in the past. I’m also happy because I avoided throwing away a device that can still be useful.

In the end, there are a couple of complaints I have about the Nexus 7.

1) If the device has a known problem with USB charging (lots of users on the internet claim to have this problem), Asus should have continued to produce the docking station, instead of forcing people to open and fix the device.

2) Lollipop is really slow on the Nexus 7. Google should do better testing of new ROMs against older official devices, and warn users that upgrading to the latest version can impact performance. I’d also like a simple option to roll back to the previous Android version directly on the device, even if it resets to factory settings.

Gian Maria.