How to add a user to Project Collection Service Account in TFS / VSO

VSO and TFS have a special group called Project Collection Service Accounts that has really powerful permissions, and usually no user should be part of that group. There are specific circumstances, like running the TFS Integration Platform to move code to TFS, where the account used to access VSO needs to be part of this group to temporarily have special permissions.

Sadly enough, the UI does not allow you to directly add a user to that group, because the Add button is disabled when you select it.


Figure 1: You cannot add users or groups to Project Collection Service Accounts directly from the UI.

The reason behind this decision is security: adding a user to this group is not part of everyday operations, users in that group have really powerful permissions, and you should add users to Service Accounts only in really specific situations and only when really required. This is the reason why you need to resort to the command line.


The TfsSecurity.exe command-line utility can add any user to any group, bypassing the limitation in the UI. Remember to remove the user from the group when he no longer needs the special permissions; the command line is the same, just change /g+ to /g-.
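As a sketch, the command looks like the following (the collection URL and the user identity are placeholders, not values taken from this post):

```shell
REM Add a user to the collection-level service accounts group.
REM Replace the collection URL and the user identity with your own.
TfsSecurity.exe /g+ "Project Collection Service Accounts" n:CONTOSO\john /collection:http://tfsserver:8080/tfs/DefaultCollection

REM The same command with /g- removes the user when the work is done.
TfsSecurity.exe /g- "Project Collection Service Accounts" n:CONTOSO\john /collection:http://tfsserver:8080/tfs/DefaultCollection
```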

As a rule of thumb, users should be added to the Service Accounts group only if strictly required, and removed from that group immediately after the specific need ceases to exist.

In older versions of VSO / TFS you could obtain the same result from the UI without the command line: you selected the user you wanted to add to the Service Accounts group, went to the Member Of section and, pressing the plus button, added the user to the group. This is disabled in the current version.


Figure 2: You can no longer add a user directly to a group.

If you really want to avoid the command line, you can still use the UI: just create a standard TFS group and then add that group to Project Collection Service Accounts. First step: create a group with a really explicit name.


Figure 3: This group has a specific name that immediately tells the reader that it is a special group.

Once the group is created, you can simply add it to the Project Collection Service Accounts group with a few clicks.


Figure 4: Add new group to the Project Collection Service Accounts group

Now you can simply add and remove users to the “WARN – Service Account Users” group from the UI whenever you need to grant or revoke Service Account permissions.

Gian Maria Ricci.

Publishing a Nuget package to Nuget/Myget with VSO Build vNext

Publishing a package to MyGet or NuGet with a TFS/VSO vNext build is a breeze. First of all, you should create a .nuspec file that specifies everything about your package, and include it in your source control. Then add a variable to the build called NugetVersion, as shown in Figure 1.
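As a sketch, a minimal .nuspec looks like the following (the id, authors, description, and file paths are placeholders, not the ones used in this post):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <!-- All values below are placeholders; fill in your own. -->
    <id>MyCompany.MyPackage</id>
    <version>1.0.0</version>
    <authors>Gian Maria</authors>
    <description>Description of the package.</description>
  </metadata>
  <files>
    <!-- Pack the release assembly for .NET 4.5 consumers. -->
    <file src="bin\Release\MyCompany.MyPackage.dll" target="lib\net45" />
  </files>
</package>
```

The version element here is just a default; as shown later in this post, the build overrides it at pack time.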


Figure 1: Added NugetVersion variable to build definition.

In this build I disabled continuous integration, because I want to publish my package only when I decide that the code is good enough to be published. Publishing to a feed on every build is usually a waste of resources and a nice way to make the history of your package a pain. Since I want manual publishing, I checked the “Allow at Queue Time” checkbox, to be able to change the NuGet version number at queue time.

Build vNext has a dedicated step called NuGet Packager that takes care of creating your package from the nuspec file, so you do not need to include nuget.exe in your repository or on the server. If you are curious where nuget.exe is stored, you can check the installation folder of your build agent and browse the Tasks directory where all the tasks are contained. There you should find the NugetPackager folder, where all the scripts used by the task are stored.


Figure 2: Added Nuget Packager step to my build.

You can use wildcards as a pattern for nuspec files; as an example, you can specify **\*.nuspec to create a package for every nuspec file in your source directory. In this example I have multiple nuspec files in my repository, but I want to deploy only a specific package during this build, so I decided to specify a single file. Thanks to the small button with the ellipsis at the right of the textbox, you can choose the file by browsing the repository.


Figure 3: Browsing source files to choose nuspec file to use.

Then I chose $(Build.StagingDirectory) as the package folder, to be sure that the resulting .nupkg file is created in the staging directory, outside of the src folder. This is important because, if you do not clean the src folder before each build, you will end up with multiple .nupkg files in your agent work directory, one for each version you published in the past. If you use the staging directory as the destination for your .nupkg files, it is automatically cleared before each build; with this configuration you are sure that the staging directory contains only the .nupkg files created by the current build.

Finally, in the Advanced tab I used the NuGet Arguments textbox to specify the -version option, to force using the version specified in the $(NugetVersion) build parameter.
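For reference, the content of the NuGet Arguments textbox is just the option followed by the variable (assuming the variable created earlier is called NugetVersion):

```
-version $(NugetVersion)
```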

The last step is adding a step of type NuGet Publisher, which will be used to publish your package to NuGet / MyGet.


Figure 4: Final publishing step to publish nuget to your feed.

If you used the staging directory as output folder for your NuGet Packager step, you can specify a pattern of $(build.stagingDirectory)\*.nupkg to automatically publish all packages created in previous steps. If in the future you change the build, adding other NuGet Packager steps to create other packages, this single NuGet Publisher step will automatically publish every .nupkg file found in the staging directory.

Finally, you need to specify the NuGet server endpoint; your combobox is probably empty, so you need to click the Manage link at the right of the combo to manage your endpoints.


Figure 5: Managing endpoint

Clicking the Manage link opens a new tab on the Services tab of the collection configuration, where you can add endpoints to connect your VSO account to other services. Since NuGet and MyGet are not in the list, you should add a new service endpoint of type Generic.


Figure 6: Adding endpoint for nuget or myget server

You must specify the server URL of your feed and your API key in the Password/Token Key field of the endpoint. Once you press OK the endpoint is created; no one will be able to read the API key from the configuration, and your key is secured in VSO.

Now all Project Administrators can use this endpoint in their NuGet Publisher step to publish against that feed, without knowing the API key or password. All endpoints have specific security, so you can specify the list of users that are able to change a specific endpoint, or the list of users that are only able to read it. This is a nice way to save the details of your NuGet feed in VSO, specifying the list of users that can use the feed, without giving the password or token to anyone.

When everything is done, you can simply queue a new build and choose the version number you want to assign to your NuGet package.


Figure 7: Queuing a build to publish your package with a specific number.

You have the ability to choose the branch you want to publish, as well as the version number to use for your NuGet package. Once the build is finished, your package should be published.


Figure 8: Your package is published in your MyGet feed.

In the previous example I used the master branch and published version number 1.3.1. Suppose you want to publish a pre-release package with new features that are not yet stable. These features are usually in the develop branch (especially true if you use GitFlow with git repositories), and thanks to this configuration you can simply queue a new build to publish a pre-release package.


Figure 9: Publishing a pre-release package using the develop branch and a NuGet version with a -beta1 suffix.

I specified the develop branch and a NuGet version number ending with -beta1, to mark it as a pre-release package. When the build is finished, you can check from Visual Studio that everything is OK.


Figure 10: Verify in Visual Studio that everything is ok.

Thanks to Build vNext, publishing your package to MyGet, NuGet, or a private NuGet feed is just a matter of adding a couple of steps and filling in a few textboxes.

Gian Maria.

Why I’m not a great fan of LINQ query for MongoDb

I’m not a great fan of the LINQ provider in Mongo, because I think that developers who use only LINQ miss the best part of working with a document database. The usual risk is that the developer always resorts to LINQ queries to load-modify-save a document, instead of using the powerful update operators available in Mongo.

Despite this consideration, if you need to retrieve the full document content, sometimes writing a LINQ query is the simplest approach; but, as always, not every valid LINQ statement you can write can be translated to a Mongo query. This is the situation of this query.

//apply security filtering.
documentsQuery = documentsQuery
  .Where(d => d.Aces.Any(a => permittingAces.Contains(a)))
  .Where(d => !d.Aces.Any(a => denyingAces.Contains(a)));

I need to filter all documents, finding documents where the Aces property (a simple HashSet&lt;String&gt;) contains at least one of the aces in the permittingAces list but does not contain any ace listed in the denyingAces collection. While this is a perfectly valid LINQ query, if you issue it to Mongo you get:

Any is only support for items that serialize into documents. The current serializer is StringSerializer and must implement IBsonDocumentSerializer for participation in Any queries.

You can use Any with sub-objects, but expressing an Any condition on an array of strings is not supported. To overcome this limitation, the .NET provider for MongoDB provides a convenient ContainsAny extension method to write the previous query.

documentsQuery = documentsQuery
  .Where(d => d.Aces.ContainsAny(permittingAces))
  .Where(d => !d.Aces.ContainsAny(denyingAces));

This LINQ query works perfectly, and if you are curious how it is translated to a standard Mongo query, you can use the GetMongoQuery() method, as I described in a previous post.
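As a sketch, the two ContainsAny filters should translate to a query document roughly like the following, where $in matches documents whose Aces array contains at least one of the permitted values and $nin excludes documents containing any denied value (the ace values here are placeholders):

```
{ "Aces" : { "$in" : ["ace1", "ace2"], "$nin" : ["ace3"] } }
```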

This simple example shows some of the limitations you can encounter using the LINQ provider with MongoDB, and my suggestion is to always prefer standard Mongo queries, because they give you a lot more flexibility, especially for update operations.

Another reason to stay away from the LINQ provider, at least in the past, is that the older version of the driver, still used by a large number of people, had a really bad implementation of the Select LINQ operator: the projection is done client side, as stated here:


Select does not result in fewer fields being returned from the server. The entire document is pulled back and passed to the native Select method. Therefore, the projection is performed client side.

This is a big problem, because the whole document is always returned from the server, using more bandwidth and more resources server side. Remember that one of the standard optimizations when querying a MongoDB instance is reducing the number of fields you load from your documents. If you use the old LINQ provider and do a Select to retrieve fewer fields from the server, you are wasting your time, because you are always loading the whole document.

Gian Maria.

ProtocolViolationException when writing on response NetworkStream in c#

I have a piece of code that returns data with an HttpListener, but I have intermittent errors in the logs:

Bytes to be written to the stream exceed the Content-Length bytes size specified

I wondered where the error was, because the code simply returns a string with the proper encoding, and the ContentLength64 property of the response was set correctly.

byte[] buffer = Encoding.UTF8.GetBytes(message);
context.Response.ContentEncoding = Encoding.UTF8;
context.Response.ContentLength64 = buffer.Length;
context.Response.OutputStream.Write(buffer, 0, buffer.Length);

It turns out that some clients sometimes issue HEAD requests, and writing to the OutputStream of a HEAD request generates a ProtocolViolationException, because the client does not expect content to be returned. I simply wrapped the above code in an if statement that does not write any content when the HttpMethod of the request is HEAD (context.Request.HttpMethod == "HEAD"), and all the errors disappeared from the log.
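A minimal sketch of the fix, reusing the variable names from the snippet above:

```csharp
byte[] buffer = Encoding.UTF8.GetBytes(message);
context.Response.ContentEncoding = Encoding.UTF8;
context.Response.ContentLength64 = buffer.Length;
if (context.Request.HttpMethod != "HEAD")
{
    // A HEAD response must carry headers only; writing a body
    // triggers the ProtocolViolationException described above.
    context.Response.OutputStream.Write(buffer, 0, buffer.Length);
}
```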

Gian Maria.

Start ElasticSearch in windows with a different configuration file

When you start Elasticsearch by double clicking Elasticsearch.bat on Windows, it uses the standard config/elasticsearch.yml file contained in the installation directory. Especially for development, it is really useful to be able to start ES with a different configuration file.

Probably my google-fu is not perfect, but every time I need to find the correct option to pass to the Elasticsearch.bat batch file, I am not able to find it with the first search and I always lose some time; this probably means that this information is not indexed perfectly.

If you are interested, the configuration option is called -Des.config and permits you to specify the config file used to start your ES node.

elasticsearch.bat -Des.config=Z:\xxxx\config\elasticsearch1.yml

You can now create as many config files as you need, and simply create multiple links to the original .bat file, each with a different config file, to start ES with your preferred options.
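As an example (the installation path and file names below are placeholders), each wrapper batch file just passes a different config file to the original script:

```
REM start-node1.bat
call C:\elasticsearch\bin\elasticsearch.bat -Des.config=C:\elasticsearch\config\elasticsearch1.yml

REM start-node2.bat
call C:\elasticsearch\bin\elasticsearch.bat -Des.config=C:\elasticsearch\config\elasticsearch2.yml
```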

Gian Maria.