Release to Azure with Azure ARM templates

Thanks to the new Release Management system in VSTS / TFS, creating a release for your on-premises environment is really simple (I’ve described the process here). Another option is creating a test environment in Windows Azure, and if you choose this option life can be even easier.

In this example I’m using Azure as IaaS, deploying software on a Windows Virtual Machine. While this is probably not the best approach to the cloud (PaaS is surely a better approach), for creating a test environment it can be perfectly acceptable.

I’m not going to give you an introduction to or explanation of Azure Resource Manager, because there are tons of resources on the web and Azure moves so quickly that any information I give you will probably be old by the time I press “publish” :). The purpose of this post is to give you a general idea of how to use Azure ARM to create a release definition that automatically generates the resources in Azure and deploys your software on them.

My goal is using Azure ARM for DevOps and Automatic / Continuous Deployment, and the first step is creating a template file that describes exactly all the Azure resources needed to host my application. Instead of starting to write such a template file from scratch, I started checking on GitHub, because there are tons of template files ready to use.

As an example I took one of the simplest, called 101-vm-simple-windows. It creates a simple Windows Virtual Machine and nothing else. That template has various parameters that allow you to specify the VM name and other properties, and it can be used directly by a Release Management definition. I made simple modifications to the template file, and in this situation it is better to first check that everything works as expected by triggering the deploy process directly from the command line.

New-AzureRmResourceGroupDeployment `
	-Name JarvisRm `
	-ResourceGroupName JarvisRm `
	-TemplateFile "azuredeploy.json" `
	-adminUsername alkampfer `
	-adminPassword ********** `
	-vmName JarvisCmTest `
	-storageAccount jarvisrmstorage `
	-dnsLabelPrefix jarvisrm

As you can see, I need to choose the name of the resource group (JarvisRm), specify the template file (azuredeploy.json) and finally pass all the parameters of the template as if they were parameters of the PowerShell cmdlet. Once the script finishes, verify that the resource group was created correctly and that all the resources are suitable to deploy your software.
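If the resource group does not exist yet, you need to log in and create it before running the deployment. This is a minimal sketch, not part of the original script, and it assumes the AzureRM module is installed; the location is just an example.

# Minimal sketch: log in and create the target resource group (location is an example)
Login-AzureRmAccount
New-AzureRmResourceGroup -Name JarvisRm -Location "West Europe" -Force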


Figure 1: Your Resource group was correctly created.

Once I verified that everything was correct, I deleted the JarvisRm resource group, ready to use the template in a release definition.

Always test your ARM template directly from the command line, to verify that everything is all right. When the resources are created, try to use them manually as the target of a deploy, and only once everything is OK start automating with Release Management.

When you have a good template file, the best place to store it is in your source control; this allows you to version the file along with the version of the code that is supposed to use it. If you do not need versioning you can simply store it in a network share, but to avoid problems it is better to have the Release Management agent run the template from a local disk and not from a network share.


Figure 2: Copy template file from a network share to a local folder of the agent.

The first step of the release process is copying the template files from a network share to the $(System.DefaultWorkingDirectory)\ARM folder, so PowerShell runs against scripts that are placed on a local disk. The second task is Azure Resource Group Deployment, which uses the template to deploy all resources to Azure.
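Just to clarify what that first step does, here is a rough PowerShell equivalent; it is only a sketch, the share path is hypothetical, and inside an agent job the release variable is exposed as an environment variable.

# Sketch: copy ARM templates from a (hypothetical) share to the agent working directory
$target = Join-Path $env:SYSTEM_DEFAULTWORKINGDIRECTORY "ARM"
New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item -Path "\\fileshare\ARMTemplates\*" -Destination $target -Recurse -Force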


Figure 3: The Azure Deployment task is used to create a Resource Group from a Template definition.

You should specify only the template file (1) and all the parameters of the template (2), such as userName, password, DNS name of the VM, etc. As a nice option you can choose Enable Deployment Prerequisites (3) to make your VM usable as a target for deploy actions. You can read more about prerequisites on the MSDN blog; basically, when you select this option the task configures PowerShell and other settings on the target machine so it can execute scripts remotely.
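For reference, the parameter overrides in (2) are passed as a single string that, as far as I remember, follows the same -name value convention used from the command line; something like the following (the password is masked exactly as in the script above):

-adminUsername alkampfer -adminPassword ********** -vmName JarvisCmTest -storageAccount jarvisrmstorage -dnsLabelPrefix jarvisrm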

Virtual machines need to be configured to be used as targets of deploy tasks, such as remote PowerShell execution, but the Azure deployment task can take care of everything for you.

This task requires that you have already connected the target Azure subscription to your VSTS account. If you have never connected your TFS / VSTS account to your Azure subscription with ARM, you can follow the instructions at this link, which contains a PowerShell script that does EVERYTHING for you. Just run the script, and write down in a safe place all the data you need to insert into your TFS / VSTS instance to connect to Azure with ARM.

Another aspect you need to take care of is the version of the Azure PowerShell tools installed on the machine where the Release Agent is running. Release Management scripts are tested against specific versions of the Azure PowerShell tools, and since the Azure team is constantly upgrading the tools, it could happen that the TFS / VSTS Release Management tasks are not compatible with the latest version of the Azure tools.

All of these tasks are open source, and you can find information directly on GitHub. As an example, at this link there is the information about the DeployAzureResourceGroup task. If you scroll to the bottom you can verify the PowerShell tools version suggested to run that task.

Figure 4: Supported version of the AzureRM module

Clearly you should install a compatible version on the machine where the agent is installed. If you are unsure whether the agents have a suitable version of the Azure PowerShell tools, you can go to the TFS admin page and verify the capabilities of the agent directly from VSTS / TFS.
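As a quick check you can also run something like this directly on the agent machine (a sketch; module names can vary between versions of the tools):

# List installed Azure PowerShell modules and their versions on the agent machine
Get-Module -ListAvailable -Name Azure, AzureRM* | Select-Object Name, Version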

Figure 5: Agent capabilities contain the version of the Azure PowerShell tools installed

A demand for Azure PS is present in the release definition, but it does not specify the version, so it is not guaranteed that your release is going to be successful.

Figure 5: Release process has a demand for Azure PowerShell tools, but not for a specific version.

As a result I had problems setting up the release process, because my agents had PowerShell tools version 1.4 installed, which is not fully compatible with the Release Management tasks. Downgrading the tools solved the problem.

If your release fails with strange errors (such as NullReferenceException), check on GitHub the version of the PowerShell tools needed to run that task and install the right version on the agent (or at least try changing the version until you find the most recent one that works).

The Azure Resource Group Deployment task takes care of everything. I modified the base template to apply a specific Network Security Group to the VM, but the general concept is that it configures every Azure resource you need to use. At the end of the script you have everything you need to deploy your software (Virtual Machines, sites, databases, etc.).

In my example I need only a VM, and once it is configured I can simply use the Copy to Azure VM task and the Execute PowerShell on Azure VM task to release my software, as I did for my on-premises environment.


Figure 6: Configuration of the Task used to copy files to Azure VM

You can specify the files you want to copy (1), the login for the machine (2), and thanks to the Enable Copy Prerequisites option (3) you can let the task take care of every step needed to allow copying files to the VM. This option is not needed if you already chose it in the Azure Deployment task, but it can be really useful if you have a pre-existing Virtual Machine you want to use.

The final step is executing the release script on the target machine, and it has the same options you specify to run a script on an on-premises machine.


Figure 7: Run the installation PowerShell script on target Azure VM

Once everything is in place you only need to create a release and wait for it to finish.


Figure 8: Output of release definition with Azure ARM

In this example, since I’m using a Virtual Machine, the deploy script is the same one I used for the on-premises release; with a PaaS approach you usually have a different script that targets Azure-specific resources (WebSites, DocumentDb, etc.).

If the release succeeded you can log in to portal.azure.com to verify that your new resource group was correctly created (Figure 1), and check that the resource group contains all the expected resources (Figure 9).


Figure 9: Resources created inside the group.

To verify that everything is OK you should check the exact version of the software that is actually deployed in the environment. From Figure 10 I can see that the release deployed version 1.5.2.

Figure 10: List of the most recent releases.

Now I can log in to the VM and use the software to verify that it is correctly installed and that the installed version is correct.


Figure 11: Software is correctly installed and the version corresponds to the version of the release.

Azure Resource Manager is a powerful feature that can dramatically simplify releasing your software to Azure, because you can just download templates from GitHub to automatically create all the Azure resources needed by your application, and let the VSTS Release Management tasks take care of everything.

Gian Maria.

Manage Environment Variables during a TFS / VSTS Build

To avoid creating unnecessary build definitions, it is a best practice to allow parameter overriding in every task that can be executed from a build. I’ve already dealt with how to parametrize tests to use a different connection string when tests are executed during the build, and I’ve used environment variables for a lot of reasons.

Environment variables are not source controlled; this allows every developer to override settings on his own machine without disturbing other developers. If I do not have a MongoDb on my machine I can simply choose to use some other instance in my network.


Figure 1: Overriding settings with environment variables.

No one in the team is affected by this setting, and everyone has the freedom to change this value to whatever he/she likes. This is important because you can have different versions of MongoDb installed in your network, with various configurations (MMapV1 or WiredTiger), and you want the freedom to choose the instance you want to use.

Another interesting aspect of environment variables is that they can be set during a VSTS / TFS build directly from the build definition. This is possible because variables defined for a build are set as environment variables when the build runs.

Figure 2: Specifying environment variables directly from the Variables tab of a build

If you allow this value to be set at Queue Time, you can change it when you manually queue a build.


Figure 3: Specifying variables value at Queue Time

If you look at Figure 3, you can verify that I’m able to change the value at queue time, but I can also simply press “Add variable” to add any variable, even if it is not included in the build definition. In this specific situation I can trigger a build and have my tests run against a specific MongoDb instance.

Remember that the value specified in the build definition overrides any value that is set as an environment variable on the build machine. This implies that, once you set a value in the build definition, you are not able to change the value for a specific build agent.

If you want to be able to choose a different value for each build agent machine, you can simply avoid setting the value on the Variables tab and instead define the variable on each build machine, so each agent has a different value. Another alternative approach is using two environment variables, e.g. TEST_MONGODB and TEST_MONGODB_OVERRIDE, and configuring your tests to use TEST_MONGODB_OVERRIDE if present, falling back to TEST_MONGODB if it is not. This allows you to use TEST_MONGODB in the build definition, but if you set TEST_MONGODB_OVERRIDE for a specific test agent, that agent will use that value.
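My tests are not written in PowerShell, but this small sketch shows the idea, which is the same in any language; the local default connection string is just an example.

# Prefer the per-agent override, then the standard variable, then a local default
$connection = $env:TEST_MONGODB_OVERRIDE
if (-not $connection) { $connection = $env:TEST_MONGODB }
if (-not $connection) { $connection = "mongodb://localhost:27017" }
Write-Host "Tests will run against $connection"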

Another interesting aspect of Environment Variable is that they are included in agent capabilities, as you can see from Figure 4.


Figure 4: All environment variables are part of agent Capabilities

This is an important aspect, because if you want that variable to be set on the agent, you can avoid including it in the Variables tab and instead require this build to run on an agent that has the TEST_MONGODB environment variable specified.


Figure 5: Add a demand for a specific Environment Variable to be defined in Agent Machine

Setting the demand is not always necessary; in my example, if the TEST_MONGODB variable is not defined, tests are executed against the local MongoDb instance. It is always a good strategy to use a reasonable default if some setting is not present in the environment variables.

Gian Maria.

Scale out deployment error when migrating Reporting Services

Part of moving your TFS server to new hardware, or creating a Pre-Production environment, is restoring the Reporting Services database. Since the databases are encrypted, if you simply restore the database and then configure Reporting Services on the new machine to use the restored database, the operation will fail, because the new server cannot read the encrypted data.

Restoring the Reporting Services database onto a new instance involves some manual steps; in particular, you need to back up / restore the encryption keys from your old server to the new one.

Clearly you should have a backup of your encryption key; this is part of a good backup process, and it is automatically performed by the standard TFS Backup wizard. If you have never backed up your encryption key, I strongly suggest you DO IT NOW. The backup procedure can be done manually from SQL Server Reporting Services Configuration Manager.


Figure 1: Backup your encryption key

You should choose a strong password, and then you can save the key in a simple file that can be moved and imported on the machine where you restored your Reporting Services database. On that server you can simply use the Restore feature to restore the encryption key, so the new installation is able to read the encrypted data from the database.

If the name of the machine has changed, for example if you perform the restore in a test environment, when you try to access your Reporting Services instance you will probably get this error:

The feature: “Scale-out deployment” is not supported in this edition of Reporting Services. (rsOperationNotSupported)

This happens because the key you restored belongs to the old server, and now your Reporting Services instance believes that it is part of a multi-server deployment, which is not supported in the Standard edition of Reporting Services.


Figure 2: Scale out Deployment settings after key is imported.

In Figure 2 you can verify that, after importing the encryption key, I have two servers listed: RMTEST is the name of the machine where I restored the DB, while TFS2013PREVIEWO is the name of the machine where Reporting Services was originally installed. In this specific scenario I’m doing a clone of my TFS environment in a test sandbox.

Luckily enough, there is this post that explains the problem and gives you a solution. I must admit that I do not feel really comfortable manually manipulating the Reporting Services database, but the solution has always worked for me. As an example, in Figure 3 you can see that I have two entries in the Keys table of the reporting database.


Figure 3: Keys table contains both entries for keys.

After removing the key of TFS2013PREVIEWO from the database, the Scale-out deployment settings came back to normal, and Reporting Services started working again.
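For reference, the cleanup boils down to deleting the row of the old server from the Keys table. This is just a sketch of the operation, assuming the default ReportServer database name and that the SQL Server PowerShell module is available; obviously take a full backup of the database before touching it.

# Remove the stale encryption key entry of the OLD server (back up the DB first!)
Invoke-Sqlcmd -ServerInstance "RMTEST" -Database "ReportServer" `
    -Query "DELETE FROM dbo.Keys WHERE MachineName = 'TFS2013PREVIEWO'"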


Figure 4: Reporting services are now operational again.

Gian Maria.

Isolate your TFS Pre-Production environment for maximum security

In a previous post I explained how to create a clone of your TFS Production environment thanks to the new TFS “15” wizard. With this post I want to share with you a simple solution I have in my bag of tricks to prevent your cloned TFS environment from interfering with the production environment.

The problem

In my environment I have all machines in network 10.0.0.0/24, my TFS has address 10.0.0.116 and the primary domain controller is 10.0.0.42. Then I have automated build and Release Management definitions that deploy against various machines: 10.0.0.180, 10.0.0.181, 10.0.0.182, etc.

Even if I used the wizard, or command-line instructions, to change the TFS server id, there is always the risk that, if a build starts from the cloned environment, something wrong will be deployed to machines used by the production environment (10.0.0.180, etc.).

Usually the trick of changing the hosts file on the PreProduction TFS machines is good if you always use machine names in your build definitions, but if I have a build that deploys directly to 10.0.0.180 there is nothing I can do. This exposes me to the risk of production environment corruption and limits my freedom to freely use the cloned TFS environment.

What I want is complete freedom to work with the cloned TFS environment without ANY risk of accessing production machines from any machine of the cloned environment (build controllers, test agents, etc.).

Virtualization to the rescue

Instead of placing the pre-production environment in my 10.0.0.0/24 network, I use the Hyper-V virtual networking capabilities to create an internal network.


Figure 1: Virtual networks configured in Hyper-V hosts

Figure 1 depicts what I have after clicking the Virtual Switch Manager setting (1): a virtual switch called “Internal Network” (2) that is configured as an internal network (3). This means that this network can be used by all VMs to communicate between themselves and with the host, but there is no possibility to communicate with the real production network. The physical network card of the Hyper-V host is bound to a standard “External Network”; it is called “ReteCablata” (4) and it is the network that can access machines in the production network.

With this configuration I decided to install all machines that will be used for TFS Pre-Production (server, build, etc.) using only the “Internal Network”. The machine I’ll use as the Pre-Production TFS has the address 10.1.0.2, while the Hyper-V host has the address 10.1.0.254. This allows my Hyper-V host to communicate with the virtual machines through the Internal Network virtual network interface.
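For completeness, this is how the internal switch and the host address could be created with PowerShell on the Hyper-V host; it is only a sketch, with names and addresses following the example above.

# Create the internal virtual switch and give the Hyper-V host an address on it
New-VMSwitch -Name "Internal Network" -SwitchType Internal
New-NetIPAddress -InterfaceAlias "vEthernet (Internal Network)" `
    -IPAddress 10.1.0.254 -PrefixLength 24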

Now if I try to log in to the machine with domain credentials I get an error.


Figure 2: I’m unable to login with domain users, because domain controller is unavailable.

Accessing with a local user works fine, and the reason I cannot log in as a domain user is that the machine is not able to reach the domain controller, since it lives on another virtual network.

Thanks to this solution I’ve created an isolated subnetwork where I can create my TFS Pre-Production / Test environment without the risk of corrupting the production environment.

Thanks to virtual networking it is easy to create a virtual network completely isolated from your production environment, where you can safely test a cloned environment.

Iptables to route only what-you-want

At this point I have an isolated environment, but since it cannot access my domain controller, I have two problems:

1) The PreProduction / Test TFS cannot access the domain, and no domain user can access TFS.
2) To access the PreProduction / Test TFS you can only use the Hyper-V host.

Clearly this makes the approach almost impractical, but the solution to this limitation is really quick. Just install a Linux machine on the Hyper-V host to act as a router; in my example I have a standard Ubuntu Cloud server without UI. The important aspect is that you need to assign both virtual networks to the machine, so it can connect both to your isolated environment (“Internal Network”) and to the production environment (“ReteCablata”).


Figure 3: Create a Linux VM and be sure to assign both network interfaces.

In my box the physical network (ReteCablata) is eth0 while the internal network is eth1; both interfaces have a static IP, and this is the configuration.

gianmaria@linuxtest1:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
     address 10.0.0.182
     netmask 255.255.255.0
     gateway 10.0.0.254
     dns-nameservers 10.0.0.42

auto eth1
# iface eth1 inet dhcp
iface eth1 inet static
     address 10.1.0.1
     netmask 255.255.255.0
     network 10.1.0.0

The configuration is simple: this machine has the 10.0.0.182 IP in my production network (eth0) and the 10.1.0.1 IP in the internal virtual network (eth1). Then I configured all Windows machines in the internal virtual network to use this machine as their gateway.


Figure 4: Configuration for Pre-Production TFS Machine

The important aspect is that it uses the IP of the Linux machine’s eth1 as gateway (10.1.0.1), and it uses 10.0.0.42 as DNS (this is the address of my primary domain controller).

Now I can configure the Linux box to become a router between the two networks. The first step is enabling forwarding with the instruction:

echo 1 > /proc/sys/net/ipv4/ip_forward

But this works only until you reboot the Linux machine; if you want the configuration to survive a reboot you can edit /etc/sysctl.conf and change the line that says net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1. Once forwarding is enabled, you can configure iptables to route. Here is the configuration:

Disclaimer: I’m absolutely not a Linux expert; this is a simple configuration I put together after studying a little how iptables works and thanks to articles around the web.

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o eth1 -m state  --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -I FORWARD -d 10.0.0.0/24 -j DROP
sudo iptables -I FORWARD -d 10.0.0.42 -j ACCEPT

The first three instructions are standard rules to configure iptables to act as a router for ALL traffic between eth0 and eth1 and vice versa. With only the first three rules, all machines in the 10.1.0.0/24 network that use 10.1.0.1 (the Linux box) as gateway can access the internet as well as ALL machines in the 10.0.0.0/24 network (production network). This is not enough for me, because with this configuration the machines in the cloned TFS environment have FULL access to the production machines.

The fourth rule tells iptables to DROP all traffic directed to subnet 10.0.0.0/24. This rule completely isolates the two networks: no machine can access 10.1.0.0/24 from 10.0.0.0/24 and vice versa. Then the fifth and last rule tells iptables to ACCEPT all traffic from and to the address 10.0.0.42, my domain controller.

Thanks to iptables and a Linux box, it is really easy to create a router that selectively filters access between the two networks. This gives you the freedom to decide which machines of the production environment can be accessed by the cloned environment.

With this configuration I have created an isolated network that is capable of contacting ONLY my domain controller 10.0.0.42, but otherwise is COMPLETELY isolated from my production network. This allows the PreProduction / Test TFS machine to join the domain and validate users, but you can safely launch builds or whatever you want on any machine of the cloned environment, because all traffic to production machines, except the domain controller, is completely dropped.

How can I access the PreProduction environment from a client machine?

The previous configuration solves only one of my two problems: the PreProduction TFS can now access only selected machines of the domain (the domain controller is usually enough), but how can you let developers or managers access the PreProduction environment to test the cloned instance? Suppose a developer is using the 10.0.0.1 machine in the production network and he wants to access the PreProduction TFS at the 10.1.0.2 address; how can you give him access without forcing him to connect to the Hyper-V host and then use the Hyper-V console?

First of all you need to tell iptables to allow traffic between that specific IP and the isolated virtual network on eth1.

sudo iptables -I FORWARD -d 10.0.0.1 -j ACCEPT

This rule allows traffic with the client IP, so packets can flow from 10.0.0.1 to every machine in the 10.1.0.0/24 network. This is necessary because we told iptables to DROP all traffic to 10.0.0.0/24 except 10.0.0.42, so you need this rule to allow traffic towards the developer’s client machine. All other machines in the production network are still isolated.

Now the developer at 10.0.0.1 still can’t reach the 10.1.0.2 machine, because it is in another subnet. To allow this he simply needs to add a route rule on his machine. Supposing that the 10.0.0.1 machine is a standard Windows machine, here is the command line the developer needs to run to access the cloned environment machines.

route ADD 10.1.0.0 MASK 255.255.255.0 10.0.0.182

With this rule the developer is telling the system that all traffic to the 10.1.0.0/24 subnet should be routed to 10.0.0.182, the address of the Ubuntu Linux machine in the production environment. Now when the developer tries to RDP to the 10.1.0.2 machine (the cloned TFS server), all traffic is routed by the Linux machine.

Final Consideration

Thanks to this configuration, all machines in the 10.1.0.0/24 network can contact and be contacted only by selected production machines, avoiding unwanted corruption of your production environment.

This gives you complete control over the IP addresses that can access your cloned environment, reducing the risk of production environment corruption to almost zero. You can allow access to selected machines, and you can also control which client machines in your production network can access the cloned environment.

Remember that, after a reboot, all rules in iptables will be cleared and you need to set them up again. You can configure the Linux box to reload all rules upon reboot, but for this kind of environment I prefer to have the ability to reboot the Linux machine to completely reset iptables. Re-applying the rules is a matter of a couple of seconds.

Gian Maria.

Create a Pre-Production / Test environment for your TFS

There are a lot of legitimate reasons to create a clone of your TFS installation: verifying an upgrade, testing some customization and so on, but traditionally creating a test environment is not an easy task.

The problem is avoiding that the test installation interferes with and corrupts your production instance, and since TFS is a complex product, there is a series of steps you need to perform for this kind of operation. Thankfully, with the upcoming version of TFS most of the work is accomplished with a wizard.

Kudos to the TFS Team for including a wizard experience to create a clone of your TFS environment.

Here are the detailed steps to create a clone environment.

Step 1: Backup Database / install TFS on new Server / Restore Database

First of all, log in to your TFS server, open c:\Program Files\Microsoft Team Foundation Server 14.0\Tools and launch TfsBackup.exe to take a backup of all databases.


Figure 1: Take a backup of your Production Database

You should only specify the name of the SQL Server instance where you have your production databases. A wizard will start, asking you to select the databases to back up and the location where you want to place the backup.


Figure 2: Choose databases to backup

The backup routine will perform a full backup.


Figure 3: Backup is taken automatically from the routine

The next step is creating a new virtual machine, installing a version of SQL Server compatible with the TFS “15” preview (I suggest SQL Server 2016), and then installing TFS.


Figure 4: Install TFS on the target machine

Once the installer finishes, the TFS “15” Configuration Wizard will appear.


Figure 5: Once installer is complete the Configuration Wizard will ask you to configure the server

Now you should go to c:\Program Files\Microsoft Team Foundation Server 15.0\Tools and launch TfsRestore.Exe.


Figure 6: TfsRestore will perform database restore

You should only choose the name of the SQL Server instance you want to use; in this example I’m creating a Pre-Production environment composed of only one machine called RMTEST. You should transfer the backup files to the target computer or place them in a network share accessible from the target machine.


Figure 7: Restore routine will prompt you for Backup Location

Once you specify the directory with the backup, the wizard will automatically list all the databases to restore for you.

Figure 8: Databases are restored in SQL Server

Step 2: Extra security precautions to avoid Production corruption

Now all the databases are restored in the SQL Server instance that will be used by the Pre-Production environment and you could start the TFS configuration wizard, but wait: first I want to take some extra security precautions.

You should edit the hosts file of the Pre-Production machine to redirect every machine name used in the production environment to a nonexistent IP. As an example, I have build and release definitions that deploy software on demo machines, and I want to prevent a build triggered on the Pre-Production TFS instance from accessing production servers.

As an extra security tip, I suggest you use the hosts file trick to minimize the risk of production environment corruption.

Figure 9: Editing the hosts file provides an extra safety net against production environment corruption

As an example, Figure 9 shows a typical hosts file: the production instance is called TFS2013PreviewOneBox, so I redirect this name to localhost on the new machine. Then I redirect all machines used as deploy targets, build servers, etc. to 10.200.200.200, which is a nonexistent IP.
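If you prefer to script this, here is a hedged sketch that appends the same kind of entries from PowerShell; the machine names other than TFS2013PreviewOneBox are hypothetical placeholders for your own build servers and deploy targets.

# Append hosts entries on the Pre-Production machine (run as administrator)
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value @"
127.0.0.1       TFS2013PreviewOneBox
10.200.200.200  BUILDSERVER01
10.200.200.200  DEPLOYTARGET01
"@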

You can also create network rules to completely isolate the Pre-Production machine from the production environment, such as placing it in another network segment and preventing routing entirely, but using the hosts file is a simpler approach that works well for small and medium installations.

Step 3: Perform the configuration with the new TFS “15” wizard

Before TFS “15” you had to resort to command-line trickery to change the server id in the databases, etc., but now you can do everything using the configuration wizard. Let’s come back to the Configuration Wizard and choose the option “I have existing database to use … ”.


Figure 10: Start upgrade wizard using existing databases

The wizard will prompt you to choose the SQL Server instance and the databases to use.


Figure 11: Choose database to use for the upgrade

Up to this point it is the standard Upgrade Wizard, but the next screen is the great news of this new installer, because it presents the option to create a Pre-Production environment.


Figure 12: This is the new option, you can choose to create a Pre-Production Upgrade Testing

Pressing Next you will see another screen that reminds you of the steps the wizard will perform to create the clone environment. As you can see, the wizard will take care of remapping connection strings, changing all identifiers and removing all scheduled backup jobs.


Figure 13: Overview of the Pre-Production scenario

Thanks to the wizard you can create a test clone of your production TFS without worrying about corrupting your production environment. The wizard takes care of everything.

Now the wizard continues, but there is another good surprise: each screen contains suggestions to minimize the risk of production environment corruption.


Figure 14: Wizard suggests you to use a different user to run TFS Services

The suggestion in Figure 14 is the most important one. I usually use an account called TfsService to run my TFS server, and that account has several privileges in my network. In the Pre-Production environment it is better to use the standard Network Service account or a different account. This is a really important security setting, because if the Pre-Production server tries to perform some operations on other servers it will probably be blocked, because the account does not have the right permissions.

To minimize the risk of corruption, never use for the Pre-Production environment the same users that you use for the production environment. Use Network Service, or users with no privileges in the network, created specifically for cloned environments.

Clearly the wizard will suggest you use a different URL than the production server. Resist the temptation to use the same URL and rely on hosts file redirection; it is really better to use a new name. This allows you to communicate the new name to the team and ask them to access the Pre-Production server to verify that everything is working, for example after a test upgrade.


Figure 15: Use a different url than production environment

You can now follow the wizard; basically the screens are the same as the upgrade, but each screen will suggest you use different accounts and different resources than the production instance.

At the end of the wizard you will have a perfect clone of your production environment to play with.

Figure 16: Configuration is finished; you now have a clone of your environment.

Step 4: Extra steps for further security

If you want to be extra sure that your production environment is safe from corruption, configure the firewall of your production systems to block any access from the IP of any machine that is part of the cloned environment. This extra security measure will prevent human errors.
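As a rough idea of what such a rule could look like on a production Windows machine, here is a sketch that blocks a whole hypothetical 10.1.0.0/24 subnet dedicated to the cloned environment; you could list individual IPs instead.

# Block inbound traffic coming from the cloned environment subnet (example range)
New-NetFirewallRule -DisplayName "Block TFS Pre-Production subnet" `
    -Direction Inbound -RemoteAddress 10.1.0.0/24 -Action Block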

Some customers have custom software that connects to the TFS instance to perform some custom logic. As an example, you could have software that uses BisSubscribe.exe or hooks to listen to TFS events and then sends commands to TFS. Suppose you want to test this kind of software against your cloned environment, so you let people install and configure everything on the Pre-Production machine, but someone makes a bad mistake and configures the software to listen to the Pre-Production environment while sending commands to the production environment. If you blocked all traffic from the Pre-Production machines to your TFS production environment, you are protected against this kind of mistake.

If you are good at networking, probably the best solution is creating all machines that are part of the Pre-Production environment (TFS, SQL, build servers, etc.) in another network segment, then configuring routing / firewall rules to allow machines in the pre-prod network to access only domain controllers, or in general only the machines that are strictly needed. This will prevent machines of the Pre-Production environment from connecting to any machine of your production environment. You can then allow selected IPs from your regular network to access Pre-Production for testing.

Gian Maria