TFS 2018 Update 3 is out: what changes for Search

TFS 2018 Update 3 is finally out and the release notes contain good news for the Search functionality: basic security is now enforced through a custom plugin. Here is what you can read in the release notes:

Basic authorization is now enabled on the communication between the TFS and Search services to make it more secure. Any user installing or upgrading to Update 3 will need to provide a user name / password while configuring Search (and also during Search Service setup in case of remote Search Service).

ElasticSearch (ES) is not secured by default: anyone who can reach port 9200 can interact with all the data without restriction. There is a commercial product made by ElasticSearch Inc to add security (called Shield), but it is not free.

Traditionally, for TFS search servers it is enough to completely close port 9200 in the firewall (if Search is installed on the same machine as the Application Tier) or, if the Search services are installed on a different machine, to open port 9200 of the Search Server only to the Application Tier instances, preventing every other computer on the network from directly accessing the ElasticSearch instance.

Remember to always minimize the attack surface with a good firewall configuration. For ElasticSearch, port 9200 should be opened only to the TFS Application Tiers.
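
If you manage the firewall with PowerShell, a rule like the following is a possible starting point. This is a minimal sketch: the rule name and the Application Tier addresses are placeholders you need to adapt to your environment.

```powershell
# Allow inbound traffic on port 9200 only from the TFS Application Tier machines
# (replace the remote addresses with the real IPs of your AT instances);
# with the default inbound policy set to block, no other machine can reach ES.
New-NetFirewallRule -DisplayName "ElasticSearch 9200 - TFS AT only" `
    -Direction Inbound `
    -Protocol TCP `
    -LocalPort 9200 `
    -RemoteAddress 10.0.0.10, 10.0.0.11 `
    -Action Allow
```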

Here are the steps you need to perform when you upgrade to Update 3 to install and configure the search services. First of all, in your search configuration you will notice a warning sign; nothing is really marked as wrong, so theoretically you could move on with the configuration.


Figure 1: Search configuration page in the TFS upgrade wizard; notice the warning sign and the User and Password fields

When you are in the review pane, the update process complains about the missing password in the Search configuration (Figure 1). At this point people get a little bit puzzled, because they do not know what to use as user name and password.


Figure 2: The upgrade summary complains that you did not specify a user and password in the search configuration (Figure 1)

If you move on, you find that it is impossible to proceed with the update, because the installer complains about a missing ElasticSearch plugin.

The error ElasticSearch does not have a plugin AlmSearchAuthPlugin installed is a clear indication that the installation on the Search server is outdated.


Figure 3: During the readiness check, the upgrade wizard detects that the search services installed on the search server (a separate machine) are missing some needed components.

The solution is really simple: you need to upgrade the Search component installation before you move on with upgrading the AT instance. In my situation the search server was configured on a separate machine (a typical scenario to avoid ES consuming too many resources on the AT).

All you need to do is copy the search installation package (you have a direct link in the search configuration page shown in Figure 1) to the machine devoted to search services and simply run the update command.


Figure 4: With a simple PowerShell command you can upgrade the installation of ElasticSearch on the Search Server.

The -Operation update parameter is needed because Search services were already configured on this server, but for Update 3 I also needed to specify a user and password to secure the ES instance. User and password can be whatever combination you want; just choose a long and secure password. After the installer finishes, all search components are installed and configured correctly; now you should reopen the Search configuration page (Figure 1) in the upgrade wizard, specify the same user name and password you used during the Search configuration and simply re-run the readiness checks.
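
For reference, the update run on the Search Server looks more or less like the snippet below. Treat it as a sketch: the exact script name and the way credentials are requested come from the search installation package shipped with Update 3, so check the README included in the package.

```powershell
# Run from the folder where the search installation package was copied.
cd C:\TfsSearchPackage
.\Configure-TFSSearch.ps1 -Operation update
# During configuration you provide the user name and password that will secure
# the ElasticSearch instance: pick a long password and keep it at hand, because
# the same credentials must be entered in the TFS upgrade wizard.
```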

Now all the readiness checks should pass, and you can verify that your ElasticSearch instance is secured simply by browsing port 9200 of your search server. Instead of being greeted with server information, you will first be asked for a user name and password. Just type the credentials chosen during the Search component configuration and the server will respond.
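
If you prefer the command line to a browser, a quick check from PowerShell could look like this (searchserver is a placeholder for your Search Server name):

```powershell
# Without credentials the call should now be rejected; with the credentials chosen
# during Search configuration the usual ElasticSearch banner should be returned.
Invoke-RestMethod -Uri "http://searchserver:9200" -Credential (Get-Credential)
```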


This is a huge step towards a more secure TFS configuration, because without resorting to a commercial plugin, ElasticSearch is at least protected with basic authentication.

Remember to always double-check your TFS environment for potential security problems and always try to minimize the attack surface with a good local firewall configuration.

I still strongly encourage you to configure the firewall to allow connections on port 9200 only from the TFS Application Tier machines, because it is always a best practice not to leave ports accessible to every computer in the organization.

Gian Maria.

Welcome Azure DevOps

Yesterday Microsoft announced a change in naming for VSTS, now branded as Azure DevOps. You can read most of the details in this blog post, and if you are using VSTS right now you will not see a big impact in the near future. Even if this is just a rebranding of the service, there are a couple of suggestions I'd like to give you for a smoother transition.

Visual Studio Team Services was rebranded as Azure DevOps; this will not impact your existing VSTS projects, but it is wise to start planning for a smooth transition.

First of all, if you still don't use the new navigation, I strongly suggest enabling it, because it will become the default navigation in the future, and it is best to gain familiarity with it before it becomes the only UI available.


Figure 1: Enable the new navigation for the account

The nice aspect is that you can enable the new navigation only for your account first, then later for all accounts in the instance. This makes the transition smoother: you can find key members of your teams who want to try the new features, let them explore it, and after some time let everyone use the new interface, knowing that at least some core members of the team are already used to it. Planning for a smooth transition instead of having a big-bang day when everyone can only use the new UI is a wise approach.

Another suggestion is to start using the new links right now: if your account is https://nablasoft.visualstudio.com, your new URI will be https://dev.azure.com/nablasoft, and it is already available for all of your accounts. You can expect the old URI to keep working for a really long time, but it is better to start using the new URI as soon as possible, to avoid having links in the old format that may cease to work some years from now.

Another part of the service affected by the change of URI is the remote address of Git repositories. Microsoft assures that the old URL will remain valid for a long time, but it is worth spending a minute updating the remotes so you never have to worry that some day in the future the remote URIs might break.


Figure 2: Change the URL of origin to adapt to the new URI of Azure DevOps repositories.

Updating the Git remote address is a good practice to immediately start using the new, official link.

Thanks to Git, the only thing you need to do is grab the new link from the new UI and use the command git remote set-url origin newlink to update the URI of the remote to the new one; then you can continue working as usual (the first time you will be prompted to log in, because you have never authenticated Git against the dev.azure.com domain).
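
As an example, the whole operation is just the following (project and repository names below are placeholders; grab the real clone URL from the new UI):

```
# Show the current remote, then point origin at the new dev.azure.com address
git remote -v
git remote set-url origin https://dev.azure.com/nablasoft/MyProject/_git/MyRepo
git remote -v   # verify that origin now uses the new URI
```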

Happy VSTS, oops 🙂 Happy Azure DevOps.

Gian Maria.

Copy Work Items between VSTS accounts / TFS Instances

This is a very common question: how can I copy Work Item information from one VSTS account to another, or between VSTS and TFS? There are many scenarios where such functionality would be useful but, sadly, there is no out-of-the-box option in the base product.

If you do not have images or other complex data inside your Work Items and you do not need to maintain history, you can simply create a query that loads all the Work Items you want to copy, open it in Excel with the TFS / VSTS integration (be sure to select all the columns of interest), then copy and paste into another Excel instance connected to the destination project, press Publish and you are done. This is only useful if you do not mind losing the images contained in the description of your Work Items and if you do not care about history, because Excel is not capable of handling images.

Copying with Excel is a quick and dirty solution to copy simple Work Items in a few minutes, without the need for any external tool.

If you need more fidelity you can rely on a couple of free tools. One of the greatest advantages of TFS / VSTS is that everything is exposed via APIs and anyone can write tools to interact with the service.

A possible choice is the VSTS Sync Migration Tools; if you read the documentation, this is what the tool can do for you:

  • Supports all currently supported version of TFS
  • Supports Visual Studio Team Services (VSTS)
  • Migrates work items from one instance of TFS/VSTS to another
  • Bulk edits fields in place for both TFS and VSTS
  • Being able to migrate Test Plans and Suites from one Team Project to another
  • Being able to migrate Teams from one Team Project to another

It is a good set of features and the tool is available via Chocolatey, so it is really simple to install. I have friends who used it and confirmed that it is really flexible; it is probably the best option if you need a high-fidelity migration.

Another super useful function of this tool is the ability to map source fields to destination fields, an option that allows you to copy Work Items between different process templates. It comes with good documentation and it is definitely a tool worth a shot.

VSTS Sync Migration Tools is one of the most useful tools to move Work Items between Team Projects. Its mapping capabilities help when changing process template.

Another interesting option is the VSTS Work Item Migrator tool, published on GitHub. It has many features, but it is somewhat limited in the versions it supports: source projects can be VSTS or TFS 2017 Update 2 or later, and the destination project can only be VSTS or TFS 2018 or later, so it is of limited use if you have an older TFS instance. Nevertheless it migrates all Work Item fields, attachments, links, Git commit links and history, so the copied Work Items are really similar to the source ones.

Gian Maria.

Be sure to use the latest version of the NuGet Restore task in VSTS Build

If you have some old builds in VSTS that use the NuGet restore task, it is time to check whether you are using the new version, because if you are still using the 0.x version you are missing some interesting features.

With VSTS builds it is always a good habit to periodically check whether some of the tasks have a newer version.

Here is, as an example, how version 0 is configured:


Figure 1: Nuget restore task in version 0

In Figure 1 you can see how a standard NuGet restore task is configured. If you are using version 0 (2) it is time to upgrade, especially because the old version of the task is not very flexible about the NuGet version to use (3).

Before changing version, you need to know that the newer version of this task goes hand in hand with another new task, the NuGet Tool Installer task, which is designed to download and install a specific version of NuGet that will be used by all subsequent tasks in the phase.


Figure 2: Thanks to the NugetToolInstaller task you can ensure that a specific version of NuGet is present in the system and used by all subsequent tasks.

The NuGet Tool Installer also ensures that all NuGet tasks will use the very same version.

After the NuGet Tool Installer you can simply configure the NuGet task in version 2, see Figure 3.


Figure 3: Nuget task version 2 in action

As you can see, you do not need to specify any version: the version used will be the one installed by the NuGet Tool Installer task, so you are always sure that the exact version of NuGet is installed and available for the build.
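
If you define the build in YAML instead of the visual designer, the same pair of tasks looks roughly like the fragment below; the version spec and solution path are just examples to adapt to your repository.

```yaml
steps:
# Download the requested NuGet version and make it available to later tasks
- task: NuGetToolInstaller@0
  inputs:
    versionSpec: '4.4.1'

# The restore uses whatever nuget.exe the installer task provided
- task: NuGetCommand@2
  inputs:
    command: restore
    restoreSolution: '**/*.sln'
```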

As usual, if you have builds that have been sitting there for a long time, take the time to check whether some of the tasks are available in a newer and more powerful version. If you wonder how to immediately see whether some of the tasks have a newer version, simply look for a little flag near the description of the task, as shown in Figure 4.


Figure 4: A little flag icon warns you that you are using an old version of the task.

Gian Maria.

Load Security Alerts in Azure Application Insights

Security is one of the big problems of the modern web: business moved to web applications, and security became an important part of application development. One side of the problem is adopting standard and proven procedures to avoid the risk of basic attacks like SQL or NoSQL injection, but a big part of security has to be implemented at the application level.

If you are using a SPA with a client framework like Angular and have business logic exposed through an API (e.g. ASP.NET Web API), you cannot trust the client, so you need to validate every server call for authorization and authentication.

When an API layer is exposed by a web application it is one of the preferred attack surfaces of your application. Never ever trust your UI logic to protect you from malicious calls.

One of the most typical attacks is forging calls to your API layer, a task that can be automated with tools like Burp Suite or WAPT, and it is a major source of problems. As an example, if you have an administrative page where you grant or deny claims to users, it is imperative that every call is accepted only if the authenticated user is an administrator of the system. Application logic can become complicated: for example, you may have access to a document because it is linked to an offer made in France and you are the Area Manager for France. The more complex the security logic, the more you should test it, but you probably cannot cover 100% of the situations.

In such a scenario it is imperative that every unauthorized access is logged and visible to administrators: if you see a spike in forged requests you should immediately investigate, because your system is probably under attack.

Proactive security is a better approach: you should be warned immediately if something suspicious is happening in your application.

Once you determine that there is a problem in your app you can raise a SecurityAlertLog, but then the question is: where should this log be stored? How can I visualize all the SecurityAlerts generated by the system? One possible solution is using Azure Application Insights to collect all the alerts and use it to monitor the trend and status of security logs.

As an example, in our web application, when the application logic determines that a request has a security problem (e.g. a user trying to access a resource he has no access to), a typical HTTP response code 403 (Forbidden) is returned. The Angular application usually prevents such invalid calls, so whenever we find a 403 it could be a UI bug (the Angular application incorrectly requests a resource the current user has no access to) or it could be a malicious attempt to access a resource. Both situations are quite problematic from a security perspective, so we want to be able to log them separately from the application log.

Security problems should not simply be logged with the standard log infrastructure, but should generate a more visible and centralized alert.

With Web API a possible solution is creating an ActionFilterAttribute that is capable of intercepting every Web API call; it depends on a list of ISecurityAlertLogService, a custom interface that represents a component capable of logging a security alert. With such a configuration we can let our Inversion of Control mechanism scan all the implementations of ISecurityAlertLogService available in the system. As an example, we log security exceptions both in a specialized collection in a Mongo database and in Azure Application Insights.


Figure 1: Declaration of a Web API filter capable of intercepting every call and sending security logs to a list of providers.

An ActionFilterAttribute is a simple class that is capable of inspecting every Web API call. In this scenario I'm interested in inspecting what happened when the request ends.


Figure 2: The filter is capable of inspecting each call upon completion and can verify whether the return value is 403.

Thanks to the filter we can simply verify whether the return code is 403, or whether the call generated an exception and that exception is a SecurityException; if the check is positive, either someone is tampering with requests or the UI has a security bug, so a SecurityLogAlert is generated and passed to every ISecurityAlertLogService.
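
Just to give an idea of the shape of such a filter, here is a minimal sketch. It follows the names used in this post (ISecurityAlertLogService, SecurityAlertMessage), but the properties of the alert and the Log method are assumptions, not the actual production code; the filter is meant to be registered as a global Web API filter so the IoC container can supply the list of services.

```csharp
using System.Collections.Generic;
using System.Net;
using System.Security;
using System.Web.Http.Filters;

public class SecurityAlertFilterAttribute : ActionFilterAttribute
{
    // ISecurityAlertLogService is the custom interface described above;
    // its shape here is assumed for the sake of the example.
    private readonly IEnumerable<ISecurityAlertLogService> _alertLogServices;

    public SecurityAlertFilterAttribute(IEnumerable<ISecurityAlertLogService> alertLogServices)
    {
        _alertLogServices = alertLogServices;
    }

    public override void OnActionExecuted(HttpActionExecutedContext context)
    {
        // A 403 response or a SecurityException means either a forged request
        // or a UI bug: both deserve a security alert.
        bool forbidden = context.Response != null
                         && context.Response.StatusCode == HttpStatusCode.Forbidden;
        bool securityException = context.Exception is SecurityException;

        if (forbidden || securityException)
        {
            var alert = new SecurityAlertMessage
            {
                Type = "ApiCall",
                User = context.ActionContext.RequestContext?.Principal?.Identity?.Name,
                Uri = context.Request.RequestUri.ToString(),
                // the real alert also carries the caller IP address and more context
            };

            // Hand the alert to every registered provider (Mongo, Application Insights, ...)
            foreach (var service in _alertLogServices)
            {
                service.Log(alert);
            }
        }

        base.OnActionExecuted(context);
    }
}
```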

Thanks to Azure Application Insights we can track the alert with very few lines of code.


Figure 3: Tracking to AI really takes just a few lines of code.

We are actually issuing two distinct telemetry items. The first one is a custom event and contains very little data: the most important part is the name, equal to the string SecurityProblem concatenated with the Type property of our AlertMessage. That property contains the area of the problem, like ApiCall or CommandCall, etc. We also add three custom properties. Of the three, the "Type" property is important because it helps us separate SecurityAlertMessage events from all the other events generated by the system; user and IP address can be used to further filter security alerts. The reason why we use so little information is that an event is a telemetry object that cannot contain much data; it is meant to simply track that something happened in the system.

Then we add a trace telemetry item, because a trace can contain much more information; basically we are tracing the same information as the event, but the message is the serialization of the entire AlertMessage object, which can contain lots of information and thus is not suited for a custom event (if a custom event is too big it will be discarded).

With this technique we are sure that no SecurityAlert custom event will be discarded due to length (we are tracking minimal information), but we also have the full information thanks to TrackTrace. If a trace is too big to be logged we will miss it, but we will never miss the event.
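
As a rough sketch of what those few lines look like (serialization here uses Json.NET; the property names and the alert shape are illustrative, not the exact production code):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Newtonsoft.Json;

public class ApplicationInsightsSecurityAlertLogService : ISecurityAlertLogService
{
    private readonly TelemetryClient _client = new TelemetryClient();

    public void Log(SecurityAlertMessage alert)
    {
        var properties = new Dictionary<string, string>
        {
            ["Type"] = "SecurityAlertMessage", // separates security items from all other telemetry
            ["User"] = alert.User,
            ["IpAddress"] = alert.IpAddress,
        };

        // Small custom event: name plus a few properties, so it is never discarded for size.
        _client.TrackEvent("SecurityProblem" + alert.Type, properties);

        // Full detail goes into a trace: the whole alert serialized as the message.
        _client.TrackTrace(JsonConvert.SerializeObject(alert), SeverityLevel.Warning, properties);
    }
}
```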


Figure 4: Telemetry information as seen from the Visual Studio Application Insights query window.

As you can see, I can filter for Type : SecurityAlertMessage to inspect only the events and traces related to security; I have my events and I can immediately see the user that generated them.

Centralizing the logs of multiple installations is a key point in security: the more log sources you need to monitor, the higher the probability that you miss some important information.

An interesting aspect is that when the software initializes the TelemetryClient it adds the name of the customer/installation, so we can have a single point where all the logs of every customer are saved. From the telemetry we can filter by customer or immediately understand where the log was generated.


Figure 5: The Customer property allows us to send all the telemetry events from multiple installations to a single Application Insights instance.
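
A minimal sketch of that initialization could be something like the following; the property name Customer and the installation name are placeholders, and a TelemetryInitializer would work equally well if you want to stamp every telemetry item regardless of which client sends it.

```csharp
using Microsoft.ApplicationInsights;

public static class TelemetrySetup
{
    // Called once at application startup: every event and trace sent through
    // this client carries the installation name, which is what Figure 5 filters on.
    public static TelemetryClient CreateClient(string installationName)
    {
        var client = new TelemetryClient();
        client.Context.Properties["Customer"] = installationName; // e.g. "CustomerA-Production"
        return client;
    }
}
```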

With a few lines of code we now have a centralized place where we can check the security alerts of all our installations.

Gian Maria Ricci