Visual Studio 2010 error connecting to TFS 2017

One of the reasons why I always suggest keeping TFS upgraded is the compatibility matrix. Microsoft ensures that old tools (like VS 2010) can always connect to the latest version of TFS, but the opposite is not true: a new version of Visual Studio can have problems connecting to older instances of TFS.

Keeping your TFS updated guarantees that you can use it with both newer and older tooling (yes, even Visual Basic 6 can work with TFS 2017).

But even if the compatibility matrix confirms that you can use VS 2010 to connect to TFS 2017, you probably need to install some additional software to make the connection work. As an example, you can receive the following error when you try to connect to a TFS 2017 TFVC repository from VS 2010.

The user name <guid> is not a fully-qualified name, parameter name workspaceOwner

This error happens because you are missing some required updates. For VS 2010 to connect to VSTS / TFS 2017, this is the software you need to install, in order:

  1. Visual Studio 2010
  2. Team Explorer 2010
  3. Visual Studio 2010 SP1
  4. Visual Studio 2010 GDR for Team Foundation Service
  5. Visual Studio 2010 Compatibility Update for Windows 8 and Visual Studio 2012

This sequence of steps is taken from an exceptional post by Jesse Houwing, which lists all the service packs and patches you need to install to connect to the various versions of TFS. I strongly suggest bookmarking that link, because it can really save you when an old client (like VS 2010 or VS 2012) has difficulties connecting to the newest version of TFS / VSTS.

Gian Maria.

Check pull request with build, without enforcing pull request

With the TFS / VSTS Build system it is possible to protect a specific Git branch, requiring that code is pushed into it only through Pull Requests, and that a pull request can be completed only if a specific build is green. Here is the typical configuration you can create in the admin page for your Git repositories.


Figure 1: Branch policies in VSTS/TFS

Figure 1 shows the configuration for branch policies; in this specific configuration I require a specific build to run whenever a member creates a pull request against the develop branch. The effect is: if a developer tries to push directly to the develop branch, the push is rejected.


Figure 2: Push was rejected because Branch Policies are defined for develop branch.
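With the policy in place, the day-to-day workflow becomes: branch, push the branch, open a pull request. Below is a minimal sketch of that flow; the bare repository stands in for the TFS remote, and all paths and branch names are just examples (the policy rejection itself happens server-side, so it cannot be reproduced locally):

```shell
set -e
rm -rf /tmp/demo-remote.git /tmp/demo
# Demo setup: a local bare repository stands in for the TFS/VSTS remote.
git init --bare /tmp/demo-remote.git
git clone /tmp/demo-remote.git /tmp/demo
cd /tmp/demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit --allow-empty -m "initial commit"
git branch develop
git push origin HEAD develop

# A direct push to develop would be rejected by the branch policy,
# so the change goes on a topic branch that is pushed separately:
git checkout -b feature/my-fix develop
git commit --allow-empty -m "work for the fix"
git push -u origin feature/my-fix
# The pull request feature/my-fix -> develop is then created in the
# web UI, which queues the validation build configured in Figure 1.
```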

Branch Policies are used to force the use of pull requests to reintegrate code into specific branches, but sometimes they can be too restrictive. For small teams it can be perfectly reasonable not to always force pull requests, letting the owner of the feature branch decide if the code needs a review. Such a relaxed policy is also useful when you start introducing Pull Requests to the team: instead of telling everyone that they will be completely unable to push to a branch except through a Pull Request, you can introduce the concept gradually, using Pull Requests only for some of the branches, so the team can become familiar with the tool.

Always be gentle when introducing a new procedure in your process; being too restrictive usually creates resistance, unless the new procedure is enforced and requested by all members of the team.

Luckily it is simple to obtain this result: in Figure 1 you should see the option “Block Pull Requests Completion unless there is a valid build”. Simply unchecking that checkbox allows you to push normally to the develop branch, but if you create a pull request, a build will still verify the correctness of the code.

Gian Maria.

Using special agent pool for special builds

When you use a Build to generate artifacts for installation, or whenever you need a build that validates code with tasks not easily runnable on a client machine, you can experience delays in installing patches on your production system.

Let's examine this situation: you have a build that produces artifacts for installation and uploads them to VSTS. Then, with Release Management, you have a release plan that deploys to production. What happens when you need to deploy a hotfix to production?

If your deploy is fully automated and you cannot take manual shortcuts, you need a fast build-deploy track for hotfixes.

In some scenarios you can build locally and patch the production system manually, but especially when you are in the cloud or dealing with a distributed system, you need to rely on your deploy scripts, without doing anything manually.

To speed up the deploy of a patch you usually configure a reduced pipeline (deploy directly to prod without the dev/test/preprod path), but you need to have the artifacts as soon as possible. The problem is that you can queue a build for your hotfix branch, but

  • Your build is queued after many other builds
  • Your build needs lots of time to execute

This is not a problem if you configure your project with an option to build locally. Such a scenario is perfectly fine, but if you have a build that produces artifacts, the only way to make a local build reliable is to have the build system run the very same set of scripts that are run locally. It is a really bad idea to have your build system use a different set of operations than your local build. This approach can be feasible, but you often lose some build-system-specific capabilities, because the build script should not rely on the build infrastructure (the usual problem is: how does the build grab test results, or logs?).
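As a rough sketch of this idea, here is a hypothetical single build entry point that both a developer and the build definition invoke with the same arguments; the function name, paths, and the packaging step are illustrative only (a real script would run your actual compile and test commands):

```shell
set -e
# build.sh — hypothetical single entry point, invoked identically by
# developers and by the TFS/VSTS build definition, so artifacts are
# always produced by the very same steps.
build() {
    SRC=$1
    OUT=$2
    rm -rf "$OUT" && mkdir -p "$OUT"
    # Real compile/test steps would go here (msbuild, dotnet test, ...);
    # for this sketch we simply package the sources as the "artifact".
    tar -czf "$OUT/package.tar.gz" -C "$SRC" .
}

# Same invocation on a dev machine and inside the build task:
rm -rf /tmp/demo-src /tmp/demo-artifacts
mkdir -p /tmp/demo-src
echo 'class Program {}' > /tmp/demo-src/Program.cs
build /tmp/demo-src /tmp/demo-artifacts
```

The build definition then contains only thin wrapper tasks around this script, keeping build-system-specific work (publishing test results, uploading the artifact folder) separate from the build logic itself.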

The simplest way to generate artifacts predictably is using builds, because your artifacts are produced with the very same set of operations and from a predefined set of configuration-controlled machines (agents).

In such a scenario, if you want the ability to run High Priority builds, you should usually have a separate agent pool, where you do not schedule any builds and where at least one agent is always ready to execute your build.

As an example, here is a pool called Solid, composed only of computers capable of running the build on a Solid State Disk.


Figure 1: Create a special pool for high priority builds.

Usually build agents are configured as standard virtual machines without SSDs, and this is perfectly acceptable for standard builds.

Whenever you need a fast build, you want agents capable of squeezing out everything they can to execute the build as fast as possible. One possibility is to use dev machines. My development computer is an i7 7700K with 32GB of RAM and a Samsung 960 NVMe disk, and I also have a development instance of MongoDb that runs in memory instead of on disk. This allows builds that test against MongoDb to run much faster than on a build machine whose agent runs on a standard disk.

Installing and configuring an agent is really simple, and the agent is completely idle when not in use, so it does not bog down your dev machine, but whenever you need to trigger a fast build it is ready to be used. If you have a small office, you can simply ask the devs not to schedule heavy tasks on the PC for the duration of the build, and you get a super-fast high priority build system for free. If you are a big company, it is probably better to have a dedicated super-fast machine ready to execute fast builds.
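Registering such an agent into the dedicated pool is a one-line configuration from the agent folder; the account URL, PAT, pool, and agent name below are placeholders (on Windows use config.cmd instead of ./config.sh):

```
# Hypothetical registration of a fast dev machine into the "Solid" pool
./config.sh --url https://youraccount.visualstudio.com \
            --auth pat --token <personal-access-token> \
            --pool Solid --agent DEV-FAST-01
```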

As with firefighters, you do not want all of them busy when you need them, and it is perfectly fine to keep them idle, ready to operate. In the same way, it is better to have an idle super-fast machine ready to execute high priority builds when you need them.

In such a situation, remember to configure Agent Maintenance, because it is unlikely that these machines have big disks. After all, if a machine is used rarely and only for fast builds, a 256 GB SSD is more than enough.

Gian Maria.

Maintenance for build agent in TFS Build

Each TFS Build agent uses a local directory to download sources, run builds, and prepare artifacts, and if you have a really high number of builds, you can run out of space on agent disks.

To minimize this problem, VSTS has a Settings tab in the pool configuration that allows scheduling of an Agent Maintenance job, as you can see in Figure 1.


Figure 1: Enabling agent scheduled maintenance

When an agent performs maintenance, it basically deletes all working directories that were not used for more than a certain number of days (30 by default). This cleans up the disk, so you will not waste space on old builds or builds that were migrated to other pools.
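Conceptually, the cleanup is similar to deleting working folders that have not been touched for more than the configured number of days. This is only an illustration of the idea with `find` (the real maintenance job keeps its own bookkeeping; paths are examples):

```shell
set -e
# Simulate an agent _work folder: directory "1" is stale, "2" is fresh.
WORK=/tmp/agent-work
rm -rf "$WORK" && mkdir -p "$WORK/1" "$WORK/2"
touch -d "40 days ago" "$WORK/1"    # not used for more than 30 days
touch "$WORK/2"                     # used recently

# Conceptually what the maintenance job does: remove working
# directories whose last use is older than the threshold (30 days).
find "$WORK" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +

ls "$WORK"    # only "2" is left
```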

This setting is especially useful for pools running on SSD or NVMe disks, where you usually schedule builds manually only when you really need a build to execute really fast. Having pools with high-end hardware lets you manually schedule high priority builds on a fast build machine, but it usually means that agent directory space is occupied by builds that will probably not be scheduled again for a long time. In that scenario you can configure a much smaller number of days before a directory is deleted, like 2 or 3.

Gian Maria Ricci.

Update GitVersion for large repositories

As you know, I’m a fanatic user of GitVersion in builds, and I’ve written a simple task to use it automatically in a TFS Build. This is the typical task you write and forget, because it just works and you usually do not need to upgrade it. But there was a build where I started to see really high execution times for the task; as an example, GitVersion needed 2 minutes to run.


Figure 1: GitVersion task run time is too high, 2 minutes

This behavior is perfectly reproducible on my local machine: that repository is quite big, it has lots of tags and feature branches, and it seems that GitVersion needs a lot of time to determine the semantic version.

Looking at the GitHub page of the project, I read some issues about performance; the good part is that all performance problems seem to disappear with the newer version, so it was time to upgrade the task.

Thanks to the new build system, updating a task in VSTS / TFS is really simple: I just deleted the old GitVersion executable and libraries from the folder where the task is defined, copied over the new GitVersion.exe and libraries, and with a tfx build tasks upload command I could immediately push the new version to my account.
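For reference, the upload boils down to one tfx-cli command after the binaries are replaced; the account URL and task folder name here are placeholders:

```
# Authenticate once, then push the updated task from the folder
# that contains its task.json (and the new GitVersion binaries):
tfx login --service-url https://youraccount.visualstudio.com/DefaultCollection
tfx build tasks upload --task-path ./GitVersionTask
```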

Since I changed GitVersion from version 2 to version 3, it is good practice to bump the Major version of the task, so that existing builds do not automatically use the new GitVersion executables, which could break something. What I want is for all builds to show me that a new version of the task is available, so you can try it and stick with the old version if GitVersion 3.6 gives you problems.

Whenever you make a major change to a TFS / VSTS Build task, it is good practice to bump the major version, so that existing builds do not automatically pick up the new version of the task.

Thanks to versioning, you can decide for each distinct build whether you want to try the new version of the task.


Figure 2: Thanks to versioning, the owner of the build can choose if the build should be upgraded to use the new version of the GitVersion task.

After upgrading, just queue a new build and verify that the task runs fine and, especially, that the execution time is reduced.


Figure 3: The new GitVersion executable gives a real boost in performance.

With a few minutes of work I upgraded my task and was able to use the new version of GitVersion.exe in all builds, reducing the execution time significantly. Compare this with the old XAML build engine and you understand why you should migrate all of your XAML Builds to the new Build System as soon as possible.

Gian Maria.