Grant rights to use $eval on MongoDb 3.2

One of the side effects of enabling authorization on MongoDb is that, even if you create a user with the “root” role, that account is not able to execute the $eval command. The symptom is that when you try to execute $eval you get this error:

mongodb Command '$eval' failed: not authorized on jarvis-framework-saga-test to execute command

This happens because $eval is deprecated and should generally not be used. Since it is a dangerous command, a user needs access to all actions on all resources to run it, so you need to create a role that has anyAction on anyResource.

If you really need to use $eval, connect to the admin database and create a new role with the following command:

	db.createRole( {
		role: "executeEval", 
		privileges: [ { 
			resource: { anyResource: true }, 
			actions: [ "anyAction" ] } ], 
		roles: []
	} )

Now that you have this new role, just add it to all the users that need to use $eval. As an example, if you have a single admin user in the admin database, run this against the admin db:

db.grantRolesToUser("admin", [ { role: "executeEval", db: "admin" } ])

And now the admin user can execute $eval against all databases.
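As a quick check you can run a trivial server-side function from the mongo shell while connected as that user; db.eval is the shell helper that issues the $eval command (this sketch only runs against a live connection):

```javascript
// Mongo shell only: requires being authenticated as the admin user.
// db.eval sends the function to the server for execution via $eval.
db.eval(function () { return 1 + 1; });
```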

Gian Maria.

Secure your MongoDb installation

In recent months many rumors have spread about MongoDb and data leaks, because people found lots of MongoDb instances exposed on the internet without any protection.

The root of the problem is probably a bad default: MongoDb starts without any authentication. Developers usually download MongoDb, configure it without authentication and access the instance without any knowledge of the MongoDb security model. This simplicity of usage can lead to insecure installations in production.

While this can be tolerable for MongoDb instances that live in an intranet, it is never a good strategy to leave MongoDb completely unauthenticated. It turns out that enabling really basic authentication is simple, even in the community edition.

Once you have started your MongoDb instance without authentication, just connect with your tool of choice (e.g. Robomongo) and create an admin user in the admin database.

use admin
db.createUser( {
    user: "admin",
    pwd: "mybeautifulpassword",
    roles: [ { role: "root", db: "admin" } ]
} )

Once this user is created, just stop MongoDb, change configuration to enable authentication.

   security:
      authorization: enabled

If authorization is enabled in the configuration file, MongoDb requires that every connection to the server is authenticated. There is a nice tutorial on the MongoDb site, but basically once authorization is enabled you can authenticate against a single database or against the admin db. With the above instruction I’ve created an admin user in the admin database with the root role. This is the minimum level of authentication you should have: a single user that can do anything.
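In the mongo shell, for example, authenticating against the admin database before issuing commands looks like this (a minimal sketch, assuming the user created above; it only runs against a live instance):

```javascript
// Mongo shell only: authenticate against the admin database.
var adminDb = db.getSiblingDB("admin");
adminDb.auth("admin", "mybeautifulpassword");   // returns 1 on success
```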

This configuration is far from being really secure, but at least it prevents accessing the MongoDb instance without a password. It is equivalent to enabling only the “sa” user on a Sql Server.

The next step is changing the connection string inside your software to specify user and password.


As with native authentication in Sql Server, username and password are stored in the connection string; pay attention to the authSource parameter. If you omit that parameter the C# driver tries to authenticate against the specified database (newDb in this example) and fails, because the only user is in the admin database. Thanks to the authSource parameter you can specify the database to use for authentication.
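Putting it together, a connection string for this setup can be sketched like this (host and port, localhost:27017, are placeholder assumptions; newDb is the target database from the example):

```javascript
// Build a MongoDB connection string for the admin user created earlier.
// Note the authSource=admin suffix: without it the driver would try to
// authenticate against newDb, where no user exists.
const user = "admin";
const password = "mybeautifulpassword";
const connectionString =
    "mongodb://" + user + ":" + encodeURIComponent(password) +
    "@localhost:27017/newDb?authSource=admin";
console.log(connectionString);
```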

You don’t need to change anything else in your code, because the connection string contains all the information needed to authenticate the connection.

To avoid having insecure instances of MongoDb in production, start securing the database during the development phase, so everyone involved in the process knows that a password is needed to access the database.

Gian Maria.

Decide when Castle.Windsor Startable Facility starts your components

Castle.Windsor’s Startable facility is a nice facility that automatically starts components that implement a specific interface (IStartable) or components registered with specific extension methods (e.g. StartUsingMethod).

This approach is really nice, and the facility can work in different ways: the old aggressive mode, which tries to start a component immediately after it is registered, and another, more useful mode, which starts a component only when all its dependencies are registered. This feature is helpful because it frees you from worrying about registration order.

Basically the problem is: if component A implements IStartable and depends on service B, and service B is registered after A, Houston we have a problem. In that scenario the Startable facility tries to instantiate A to call Start(), but A cannot be resolved because its B dependency is still missing. To avoid this problem the Startable facility supports a deferred start, where A is instantiated (and started) only after all its dependencies (in this scenario B) are correctly registered.

But this is not enough in some scenarios. I have a problem because I not only need the component to be started after all dependencies are registered, but I also want to be sure that the component is started after I’ve started the Rebus IBus interface.

Generally speaking there are a lot of legitimate situations where you want to control WHEN the Startable Facility actually instantiate Startable Components.

A standard solution is not to use the facility at all: when you want to start IStartable components, you simply scan all registered components in Castle to find those that implement IStartable, then create and start them.

This approach is wrong, because it has a couple of problems: the first is that it does not work for components registered with the StartUsingMethod fluent interface; the second is that the Startable facility also takes care of calling Stop during the decommission phase.

To overcome this problem you can write a modified version of the Startable facility, with a simple method that has to be manually called to start everything. Here is a possible implementation:

public class MyStartableFacility
    : AbstractFacility
{
    private ITypeConverter converter;

    protected override void Init()
    {
        converter = Kernel.GetConversionManager();
        Kernel.ComponentModelBuilder.AddContributor(new StartableContributor(converter));
    }

    public void StartAllIStartable()
    {
        IHandler[] handlers = Kernel.GetAssignableHandlers(typeof(object));
        foreach (var handler in handlers)
        {
            if (typeof(IStartable).IsAssignableFrom(handler.ComponentModel.Implementation) ||
                IsStartable(handler))
            {
                // Resolving is enough: the StartableContributor calls Start.
                handler.Resolve(CreationContext.CreateEmpty());
            }
        }
    }

    public static bool IsStartable(IHandler handler)
    {
        var startable = handler.ComponentModel.ExtendedProperties["startable"];
        var isStartable = (bool?)startable;
        return isStartable.GetValueOrDefault();
    }
}

Most of this code is taken from the original implementation of the facility. It adds the StartableContributor to the kernel, exactly as the original facility does, but it does not start anything automatically. To start everything you need to call the StartAllIStartable method, which simply scans all registered components and, for each startable one, resolves the object.

All the dirty work is actually done by the accessory classes of the Startable facility (StartableContributor), and you only need to resolve the startable objects to have everything work as expected. To understand if a component is startable you can simply check the extended property “startable”, which is inserted by the StartableContributor.

Gian Maria.

No agent could be found with the following capabilities

In the TFS 2015 / VSTS new build system, each task contains a series of requirements that need to be matched by agent capabilities for the task to run. Usually you install Visual Studio on the machine with the build agent and can schedule standard .NET builds without problems, but what happens when the build starts to evolve?

When you start creating more complex builds, you can find that your agent does not meet the requirements because it misses some of the required capabilities. As an example, in a TFS 2015 build I’ve added a task to run SonarQube analysis on my code.


Figure 1: A build with SonarQube analysis enabled

Now if I queue a build manually, TFS warns me that it is not able to find a suitable agent; if you ignore that warning and queue the build anyway, here is the result.


Figure 2: Build failed because no suitable agent was found

The build failed because there is no agent capable of running it. Now you should go to the Project Collection administration page and check all the agents.

Thanks to the Capabilities tab in the administration page I can see a complete list of all agent capabilities.

Figure 3: Viewing agent capabilities in the administration page

From that picture I verified that the only agent I’ve configured was missing the Java capability, so I simply remote desktopped to the server, installed Java on the machine and then restarted the agent service (VSO-Agent-TFS2013Preview). After the agent restarted, the build ran just fine.

Thanks to the new build system, TFS build agents can automatically detect some known capabilities (such as Java being installed), and these are automatically matched against all the requirements contained in the tasks you use in the build. TFS then chooses to run the build only on an agent that satisfies the requirements, and if no agent is found it warns you with a clear error.
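The matching rule can be sketched roughly as follows (a simplification of what TFS actually does; the function and the sample data are made up for illustration):

```javascript
// Simplified sketch: an agent can run a build only if every demand of the
// build's tasks names a capability the agent exposes.
function agentSatisfies(demands, capabilities) {
    return demands.every(function (name) { return name in capabilities; });
}

// The SonarQube analysis task demands java; only the second agent qualifies.
console.log(agentSatisfies(["msbuild", "java"], { msbuild: "14.0" }));
console.log(agentSatisfies(["msbuild", "java"], { msbuild: "14.0", java: "1.8" }));
```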

Gian Maria.

Uploading custom Build Task to TFS 2015

In a previous article I wrote about how to write a custom task for Visual Studio Team Services, but a common question is: can I use the same technique to write a task for TFS 2015 on-premises?

The answer is yes, and it is really simple, thanks to this fantastic article by Jesse, which explains how to use Fiddler to authenticate against an on-premises TFS without the hassle of enabling basic authentication. Thanks to that article and Fiddler, you can simply log in from tfx-cli to your TFS 2015 without any problem.
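With Fiddler acting as the proxy described in Jesse’s article, the two tfx-cli steps look roughly like this (the collection URL and the task folder name are placeholders for your own values):

```shell
tfx login --service-url http://localhost:8080/tfs/DefaultCollection --auth-type basic
tfx build tasks upload --task-path ./MyCustomBuildTask
```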


Figure 1: Login against your local TFS Service

Once I’m logged in, I can use the very same command used for VSTS to upload the directory where I defined the build task to my local on-premises TFS server.


Figure 2: Task was uploaded to the server

As you can see, the task was uploaded to the server exactly the same way I uploaded it to my VSTS account. The task is now available to be used in my TFS.


Figure 3: Custom task in action in a TFS 2015 on-premise build

Thanks to PowerShell-based extensibility I did not need to care about the version of VSTS or TFS, because the script has no reference to any dll or package, and the same task can be used both in VSTS and in TFS 2015 without changing a single line.

Thanks to the new build system, extending a build for both VSTS and TFS is now a simple and easy task.

Gian Maria.