How to securely expose my test TFS on the internet

I'm not a security expert, but I have basic knowledge of the topic, so when the time came to expose my test TFS to the outside world I took some precautions. First of all, this is a test TFS instance running in my test network; it is not a production instance, and I only need to access it occasionally when I'm outside my network.

Instead of mapping port 8080 on my firewall, I deployed a Linux machine, enabled SSH, and added Google two-factor authentication, then exposed port 22 on a different external port. Thanks to this, the only port exposed on my router is one that maps to port 22 on my Linux instance.

Now, when I'm on an external network, I use PuTTY to connect via SSH to that machine and set up tunneling as shown in Figure 1.


Figure 1: Tunneling to access my TFS machine

Tunneling lets me map port 8080 of the 10.0.0.116 machine (my local TFS) to port 8080 on my local machine. Now, from a machine outside my network, I can log in to that Linux machine.
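For reference, this is a hedged command-line equivalent of the tunnel configured in Figure 1, using either PuTTY's plink or the standard OpenSSH client; my-home-router.example.com and the external port 2222 are placeholders for your own values.

# Forward local port 8080 to port 8080 of 10.0.0.116, through the SSH box exposed on port 2222
plink -ssh -P 2222 -L 8080:10.0.0.116:8080 pi@my-home-router.example.com
# or, with the OpenSSH client
ssh -p 2222 -L 8080:10.0.0.116:8080 pi@my-home-router.example.com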


Figure 2: Login screen with verification code.

This is a Raspberry Pi running Linux: I simply use pi as the username, then enter the verification code from my cellphone (Google Authenticator app), and finally the password of my account.

Once the SSH connection to the Raspberry Pi is established, I can simply browse to http://localhost:8080 from my external machine and everything is redirected through a secure SSH tunnel to the 10.0.0.116 machine. Et voilà: I can access any machine and any port in my network just using SSH tunneling.


Figure 3: My local TFS instance now accessible from external machine

This is surely not a tutorial on how to expose a production TFS instance (please use HTTPS), but rather a simple tutorial on how you can access every machine in your local lab without directly exposing its ports on your home router. If you are a security expert you will probably find flaws in this approach, but it is surely better than directly mapping ports on the router.

Gian Maria.

Rename xmlrpc.php on your WordPress installation

Last month my Live Writer stopped being able to publish on this blog. Using Fiddler I discovered that I got an error calling xmlrpc.php. I opened a ticket with my blog hosting provider, and support told me that access to xmlrpc.php was blocked on that machine (my blog is on a shared host) because of spam attacks.

The problem is that malicious scripts continuously try to send spam content to WordPress blogs using the standard API, and this usually causes a spike in CPU usage. For this reason many providers block access to the xmlrpc.php file.

If you care about your WordPress blog, leaving xmlrpc.php unchanged is bad practice, because it leaves you vulnerable to attacks. The solution is simple: I renamed my xmlrpc.php file to something strange, like xjearhsaherbsaera.php. Using a random series of letters makes your WordPress API endpoint undiscoverable by an attacker: anyone trying to access xmlrpc.php immediately gets a 404 error, and it is really unlikely that they will guess the real name of the file.
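If you want to double-check the rename from a client, a minimal PowerShell sketch like the following can help; www.example.com is a placeholder for your blog address.

try {
    # The original endpoint should no longer exist after the rename
    Invoke-WebRequest -Uri 'http://www.example.com/xmlrpc.php' -UseBasicParsing | Out-Null
    Write-Host 'xmlrpc.php is still reachable: the rename did not work'
} catch {
    Write-Host "xmlrpc.php request failed as expected: $($_.Exception.Message)"
}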

Thanks to this solution I can use my Windows Live Writer again, even though my provider blocks the xmlrpc.php file.

Gian Maria.

Cleaning up your WSUS Server

The situation

I have a WSUS server that has been up and running for the last 2 years; it runs on an HP ProLiant MicroServer where I also have a domain controller for my test lab. The purpose of the WSUS server is to reduce the download time for updates when I install test VMs. I have lots of test VMs, and the time needed to download updates is really long, so I decided to configure a WSUS server to mitigate that problem.

Since I really do not need the fine-grained management of a real production environment, I set up my WSUS server to download almost every update. For 2 years I only approved updates without any further management, and one day my WSUS server became so slow that the console crashed 90% of the time and I was no longer able to approve new updates.

Time to clean up!!

There are a lot of great articles on the internet about how to clean up a WSUS server; I just want to share my experience so you can avoid my mistakes and have your WSUS server fully operational in the least possible time.

Move to better hardware

My experience was: if your WSUS server has slowed down too much, there is no way to recover it without temporarily moving to better hardware.

To give you an idea of what I mean: launching the standard cleanup on my WSUS server reached barely 20% after 24 hours, while on faster hardware the whole operation took 8 hours. If WSUS is virtualized you can temporarily move the VM to hardware with a faster CPU and an SSD, but in practice you can simply move the database to faster hardware and you are ready to go.

The goal is to move the database to the fastest machine in your network, do the maintenance, then move the database back to the original WSUS location. There are tons of resources on how to move the database to new hardware; this is the link I followed, and it is just a matter of installing SQL Server Management Studio, detaching the database, and re-attaching it on a new SQL instance.
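As a hedged sketch of that detach/re-attach step (not the exact procedure from the linked article), run from an elevated PowerShell prompt with the SQL command-line tools installed; FASTSQL is a placeholder for the temporary server and the file paths are assumptions.

# Stop the services that keep SUSDB open, then detach it from the Windows Internal Database
Stop-Service WsusService, W3SVC
sqlcmd -E -S "np:\\.\pipe\Microsoft##WID\tsql\query" -Q "ALTER DATABASE SUSDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; EXEC sp_detach_db 'SUSDB';"
# Copy SUSDB.mdf and SUSDB_log.ldf to the temporary machine, then attach them there
sqlcmd -E -S FASTSQL -Q "CREATE DATABASE SUSDB ON (FILENAME = 'D:\Data\SUSDB.mdf'), (FILENAME = 'D:\Data\SUSDB_log.ldf') FOR ATTACH;"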

Be sure that your destination SQL Server is the very same version as the one used by your original WSUS installation

When you connect to your original WSUS database (on Windows Server 2012 R2 the instance name is \\.\pipe\Microsoft##WID\tsql\query), issue a SELECT @@VERSION to verify the SQL Server version (in my situation it is 2012). Since cleanup operations use at most one CPU core on the database server (at least in my situation), I moved the database to an i7 2600K overclocked to 4.2 GHz with a Samsung 840 SSD disk. During the cleanup operations sqlservr.exe used only a single core, so I suggest moving your database to a machine with a powerful CPU.
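That version check can be done with sqlcmd from an elevated PowerShell prompt; FASTSQL is again the hypothetical name of the temporary server.

# Version of the Windows Internal Database behind WSUS
sqlcmd -E -S "np:\\.\pipe\Microsoft##WID\tsql\query" -Q "SELECT @@VERSION"
# Version of the temporary server: the two outputs should report the same SQL Server version
sqlcmd -E -S FASTSQL -Q "SELECT @@VERSION"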

Be sure that the machine with the temporary SQL Server installation is joined to the domain, or you will not be able to access the database from the WSUS machine.

To be 100% sure that the WSUS machine is able to access the database, configure the machine account of the server where WSUS is running as an administrator of the temporary SQL Server instance.


Figure 1: Neuromancer is the computer running WSUS; its machine account is added as a login to SQL Server so that the WSUS service (which runs as Network Service) is able to reach the SQL instance on another machine of the domain
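In T-SQL, the configuration shown in Figure 1 boils down to something like the following sketch; MYLAB is a placeholder for the domain name, FASTSQL for the temporary server, and NEUROMANCER$ is the machine account of the WSUS computer.

# Add the WSUS machine account as a login and make it an administrator of the temporary instance
sqlcmd -E -S FASTSQL -Q 'CREATE LOGIN [MYLAB\NEUROMANCER$] FROM WINDOWS; ALTER SERVER ROLE sysadmin ADD MEMBER [MYLAB\NEUROMANCER$];'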

Rebuild your indexes

You can find instructions at this address on how to rebuild all the indexes of your WSUS database. With an 11 GB database the whole operation on an SSD took no more than 15 minutes. This operation is really important, because if some indexes are heavily fragmented, all cleanup operations will be really slow.
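If you prefer not to use the full maintenance script from the link, a quick-and-dirty alternative is a plain rebuild of every index in SUSDB; sp_MSforeachtable is an undocumented but widely used helper, and FASTSQL is the hypothetical temporary server.

# Rebuild every index of every table in the WSUS database
sqlcmd -E -S FASTSQL -d SUSDB -Q "EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'"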

What to clean up

The very first and fundamental technique is to avoid downloading driver updates.

There is absolutely no need to let WSUS manage driver updates: in my system, after I removed them, the total number of updates dropped from 62,000 to 12,000.

If you read articles on the internet, you may believe that there is no way to really delete an update from WSUS: you can only decline it, but it will remain in the database. This is true if you are using the MMC user interface, but with PowerShell you can run a little script that removes all the updates of type Drivers from your system. That article also describes a technique to remove updates directly from the database; I absolutely discourage you from using it, because you can potentially destroy everything. Here is a simple PowerShell script that removes the updates from WSUS in a supported way.

# Load the WSUS administration API and connect to the local WSUS server
[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | Out-Null
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
# Remove every update whose classification is Drivers
$wsus.GetUpdates() | Where-Object { $_.UpdateClassificationTitle -eq 'Drivers' } | ForEach-Object { $wsus.DeleteUpdate($_.Id.UpdateId.ToString()); Write-Host "$($_.Title) removed" }

This script can take days to run on a standard machine. Thanks to the fast CPU and SSD, my 40,000 driver updates were deleted in about 9 hours. This confirms the need to temporarily move the database to a really fast machine.

This script is also useful to remove updates related to products and classifications you no longer need. As an example, now that Windows XP is out of support, if you do not have any Windows XP machine in your lab, you can remove Windows XP from the synchronized products and run the script to remove all updates that refer to it. So please review all of your products and classifications to reduce the number of updates stored.
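As a hedged sketch, the same script can be adapted to filter on product titles; ProductTitles is part of the same WSUS administration API used above, and $wsus is the server object created in the previous script.

# Remove every update whose product list mentions Windows XP
$wsus.GetUpdates() |
    Where-Object { $_.ProductTitles -contains 'Windows XP' } |
    ForEach-Object { $wsus.DeleteUpdate($_.Id.UpdateId.ToString()); Write-Host "$($_.Title) removed" }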

Finally, you can also use the script to delete all the updates that are declined and superseded by other updates. Once you have finished removing all unnecessary updates, your WSUS server should be much faster.
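Again as a sketch, the declined and superseded updates can be selected through the IsDeclined and IsSuperseded flags exposed by the same API.

# Remove every update that has been declined and is superseded by another update
$wsus.GetUpdates() |
    Where-Object { $_.IsDeclined -and $_.IsSuperseded } |
    ForEach-Object { $wsus.DeleteUpdate($_.Id.UpdateId.ToString()); Write-Host "$($_.Title) removed" }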

Final cleanup

After you have removed all the unnecessary updates, you should run the standard WSUS Cleanup Wizard, followed by another full rebuild of all indexes; then everything is done.
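On Windows Server 2012 and later the cleanup can also be run from PowerShell through the UpdateServices module instead of the wizard; a minimal sketch, with the switches that I believe correspond to the wizard options:

# Run the standard WSUS cleanup from PowerShell instead of the MMC wizard
Get-WsusServer |
    Invoke-WsusServerCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates -CompressUpdates -CleanupUnneededContentFiles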

Another suggested option is defragmenting the drive where the WSUS database is located, an operation that can be done after you have stopped the instance that is using the database. If you followed my suggestion of moving the database to better hardware, I also suggest moving it back to a drive with plenty of free space on your WSUS server.
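A hedged sketch of that defrag step on Windows Server 2012 R2, assuming the database files sit on drive D: and that MSSQL$MICROSOFT##WID is the name of the Windows Internal Database service on your system:

# Stop the Windows Internal Database so the SUSDB files are not in use, defragment the drive, then restart it
Stop-Service 'MSSQL$MICROSOFT##WID'
Optimize-Volume -DriveLetter D -Defrag -Verbose
Start-Service 'MSSQL$MICROSOFT##WID'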

In my situation the C: drive is 120 GB with 50 GB free. After all cleanup operations were finished, I detached the database from the temporary location and decided to move the files to a 2 TB drive with 1 TB free. With that amount of free space, after the copy the database files end up completely defragmented.

Now my WSUS Server is operational again.

Gian Maria

Windows Server 2012 R2 switching to AHCI after installation

Some weeks ago I installed a new server, and at installation time I made the really bad mistake of not fully checking the BIOS settings. This week I moved a couple of SSDs to that server, because it will be used for virtualization, and I noticed that my Vertex 4 was performing much slower than in the original system, so I immediately checked and verified that I had forgotten to enable AHCI in the BIOS.

SHAME ON ME!!!!!

If you have ever had this problem in the past, you know that if you simply reboot your machine, enable AHCI in the BIOS, and boot back into Windows, you will be welcomed by a beautiful blue screen telling you that the system is not able to boot.

With Windows Server 2012 I found that a reboot in Safe Mode is enough. Just reboot the machine, enter the BIOS, enable AHCI for your motherboard, then reboot again. Now press F8 to open the Windows boot menu and choose Safe Mode; the system should boot correctly.

Now simply reboot in standard mode and everything should work correctly. At least it worked in my system.
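If catching the F8 menu at boot is tricky on your hardware, an alternative technique (not the one I used, but commonly suggested) is to force the next boot into Safe Mode with bcdedit from an elevated prompt before touching the BIOS:

# Force the next boot into Safe Mode, then reboot and enable AHCI in the BIOS
bcdedit /set '{current}' safeboot minimal
# After Windows has started in Safe Mode with AHCI enabled, remove the flag and reboot normally
bcdedit /deletevalue '{current}' safeboot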

Gian Maria.

Samsung EVO 840 500GB VS OCZ Vertex 4 256GB

I decided to move to a 500 GB SSD to have plenty of space to store my virtual machines. At the time of the decision, the Samsung EVO was the best option. I have posted benchmarks of my Vertex 4 256 GB in the past, and I was really curious about the performance of the Samsung EVO.

The Vertex 4 has a really impressive write speed, even if, after one year of use, testing it again shows a decrease in performance. Running the test on the EVO gave me these results.

[Figure: CrystalDiskMark results for the Samsung EVO 840]

It actually seems to outperform the Vertex 4, which makes me really happy, because I feared the EVO would be a little slower in writes than the Vertex 4, but it turns out to be at least comparable. This test was also done with a 500 MB file size, while the original Vertex 4 test used a 100 MB file size; for the sake of comparison I ran another test with the same 100 MB setting, but the result is substantially the same (just the usual small fluctuation of the numbers).

Samsung also has a nice feature in its Magician software for EVO SSDs that can use system RAM to further improve the performance of the SSD. The Magician software also makes it possible to upgrade the firmware of the SSD directly from your Windows operating system, even if it is the system disk. Creating a bootable CD or USB stick with the firmware update program is not a big problem, but pressing a button and having your firmware updated is a nice convenience.

After updating the firmware and applying some OS optimizations suggested by the Magician software, I re-ran the test, just to see if anything changed (I was not expecting any substantial difference). Here is the result if you are curious.

[Figure: CrystalDiskMark results after the firmware update and OS optimizations]

As expected the data is substantially the same; a little fluctuation is quite normal. Then I decided to enable RAPID mode, a technique that uses the computer's RAM to accelerate disk operations. I know perfectly well that this kind of optimization should be evaluated in everyday use, but I decided to do another run of CrystalDiskMark to understand what happens. Here is the result.

[Figure: CrystalDiskMark results with RAPID mode enabled]

I must admit that I find it difficult to squeeze much meaning out of these numbers: the increase in the benchmark is probably due to how the test is performed, and I doubt it is a "real" improvement in everyday use, but I'll leave it enabled just to see whether I perceive any change compared to having it disabled.

Gian Maria.