Load Security Alerts into Azure Application Insights

Security is one of the big problems of the modern web: business has moved to web applications, and security has become an important part of application development. One side of the problem is adopting standard, proven procedures to avoid the risk of basic attacks like SQL or NoSQL injection, but a big part of security is implemented at the application level.

If you are using a SPA with a client framework like Angular and your business logic is exposed through an API (e.g. ASP.NET Web API), you cannot trust the client, so you need to validate every server call for authentication and authorization.

When an API layer is exposed by a web application, it is one of the preferred attack surfaces. Never, ever trust your UI logic to protect you from malicious calls.

One of the most typical attacks is forging calls to your API layer, a task that can be automated with tools like Burp Suite or WAPT, and it is a major source of problems. As an example, if you have some administrative page where you grant or deny claims to users, it is imperative that every call is accepted only if the authenticated user is an administrator of the system. Application logic can become complicated: as an example, you can have access to a document because it is linked to an offer made in France and you are the Area Manager for France. The more complex the security logic, the more you should test it, but you probably cannot cover 100% of the situations.

In such a scenario it is imperative that every unauthorized access is logged and visible to administrators, because if you see a spike in forged requests you should immediately investigate: your system is probably under attack.

Proactive security is a better approach: you should be warned immediately if something suspicious is happening in your application.

Once you determine that there was a problem in your app you can raise a security alert, but then the question is: where should this log be stored? How can I visualize all the security alerts generated by the system? One possible solution is using Azure Application Insights to collect all alerts and use it to monitor the trend and status of security logs.

As an example, in our web application, when the application logic determines that a request has a security problem (e.g. a user trying to access a resource he has no access to), a typical HTTP response code 403 (Forbidden) is returned. The Angular application usually prevents such invalid calls, so whenever we find a 403 it could be a UI bug (the Angular application incorrectly requested a resource the current user has no access to) or it could be a malicious attempt to access a resource. Both situations are quite problematic from a security perspective, so we want to be able to log them separately from the application log.

Security problems should not simply be logged with the standard log infrastructure; they should generate a more visible and centralized alert.

With Web API a possible solution is creating an ActionFilterAttribute capable of intercepting every Web API call. It depends on a list of ISecurityAlertLogService, a custom interface that represents a component capable of logging a security alert. With such a configuration we can let our Inversion of Control mechanism scan for all implementations of ISecurityAlertLogService available in the system. As an example, we log security exceptions in a specialized collection in a Mongo database and on Azure Application Insights.

Figure 1: Declaration of a Web API filter capable of intercepting every call and sending security logs to a list of providers.
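Since the original image may not be available, here is a minimal sketch of what such a declaration could look like; the names SecurityAlertFilterAttribute, ISecurityAlertLogService and SecurityAlertMessage are assumptions based on the description above, not the author's exact code.

using System.Collections.Generic;
using System.Web.Http.Filters;

// Assumed DTO carrying the alert details.
public class SecurityAlertMessage
{
    public string Type { get; set; }      // area of the problem, e.g. ApiCall, CommandCall
    public string User { get; set; }
    public string IpAddress { get; set; }
}

// Assumed custom interface: a component capable of logging a security alert.
public interface ISecurityAlertLogService
{
    void Log(SecurityAlertMessage alert);
}

// The filter depends on the list of all ISecurityAlertLogService
// implementations that the IoC container can resolve.
public class SecurityAlertFilterAttribute : ActionFilterAttribute
{
    private readonly IEnumerable<ISecurityAlertLogService> _logServices;

    public SecurityAlertFilterAttribute(IEnumerable<ISecurityAlertLogService> logServices)
    {
        _logServices = logServices;
    }
}

// Registered globally (e.g. in WebApiConfig) so it intercepts every Web API call:
// config.Filters.Add(new SecurityAlertFilterAttribute(allLogServices));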

An ActionFilterAttribute is a simple class capable of inspecting every Web API call. In this scenario I’m interested in inspecting what happened when the request ends.

Figure 2: The filter inspects each call upon completion and can verify whether the return code is 403.
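A minimal sketch of that inspection, continuing the assumed filter above (the exact checks in the author's code may differ):

// Inside SecurityAlertFilterAttribute; needs System.Net, System.Security
// and System.Web.Http.Filters.
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
    // A 403 response or a SecurityException both indicate a security problem.
    bool forbidden = actionExecutedContext.Response != null
        && actionExecutedContext.Response.StatusCode == HttpStatusCode.Forbidden;
    bool isSecurityException = actionExecutedContext.Exception is SecurityException;

    if (forbidden || isSecurityException)
    {
        var alert = new SecurityAlertMessage
        {
            Type = "ApiCall",
            User = actionExecutedContext.ActionContext.RequestContext.Principal?.Identity?.Name
            // the IP address and request details would be extracted here as well
        };

        // Hand the alert to every registered provider (Mongo, Application Insights, ...).
        foreach (var logService in _logServices)
        {
            logService.Log(alert);
        }
    }

    base.OnActionExecuted(actionExecutedContext);
}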

Thanks to the filter we can simply verify whether the return code is 403 or the call generated an exception of type SecurityException; if the check is positive, someone is tampering with requests or the UI has a security bug, so a SecurityAlertMessage is generated and passed to every ISecurityAlertLogService.

Thanks to Azure Application Insights we can track the alert with very few lines of code.

Figure 3: Tracking to Application Insights takes just a few lines of code.
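A minimal sketch of an ISecurityAlertLogService implementation backed by Application Insights (property names and serialization details are illustrative; the code assumes the Microsoft.ApplicationInsights and Newtonsoft.Json packages):

using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Newtonsoft.Json;

public class ApplicationInsightsSecurityAlertLogService : ISecurityAlertLogService
{
    private readonly TelemetryClient _client = new TelemetryClient();

    public void Log(SecurityAlertMessage alert)
    {
        var properties = new Dictionary<string, string>
        {
            { "Type", "SecurityAlertMessage" }, // separates security events from all the others
            { "User", alert.User },
            { "IpAddress", alert.IpAddress },
        };

        // 1) A tiny custom event: the name is "SecurityProblem" plus the alert type
        //    (e.g. ApiCall, CommandCall). Events cannot carry much data.
        _client.TrackEvent("SecurityProblem" + alert.Type, properties);

        // 2) A trace with the full serialized alert: traces can carry much more
        //    information than events.
        _client.TrackTrace(JsonConvert.SerializeObject(alert), SeverityLevel.Warning, properties);
    }
}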

We are actually issuing two distinct telemetry events. The first one is a custom event that contains very little data: the most important part is the name, equal to the string SecurityProblem concatenated with the Type property of our AlertMessage. That property contains the area of the problem, like ApiCall or CommandCall, etc. We also add three custom properties. Of the three, the “Type” property is the most important because it helps us separate SecurityAlertMessage events from all the other events generated by the system; user and IP address can be used to further filter security alerts. The reason why we include so little information is that an Event is a telemetry object that cannot contain much data; it is meant to simply track that something happened in the system.

Then we add a Trace telemetry event, because a trace can contain much more information. Basically we are tracing the same information as the Event, but the message is the serialization of the entire AlertMessage object, which can contain lots of information and is thus not suited for a custom event (if the custom event is too big it will be discarded).

With this technique we are sure that no SecurityAlert custom event will be discarded due to length (the event carries minimal information), but we also have the full information in the TrackTrace call. If a trace is too big to be logged we will miss it, but we will never miss an event.

Figure 4: Telemetry information as seen from the Visual Studio Application Insights query window.

As you can see, I can filter for Type : SecurityAlertMessage to inspect only the events and traces related to security; I have my events and I can immediately see the user that generated them.

Centralizing the logs of multiple installations is a key point in security: the more log sources you need to monitor, the higher the probability that you miss some important information.

An interesting aspect is that when the software initializes the TelemetryClient it adds the name of the customer/installation, so we can have a single point where the logs of every customer are saved. From the telemetry we can filter by customer and immediately understand where the log was generated.

Figure 5: The Customer property allows us to send telemetry events from multiple installations to a single Application Insights instance.
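A minimal sketch of this initialization, assuming the customer name comes from configuration (the classic Application Insights SDK exposes a Context.Properties dictionary whose entries are attached to every telemetry item sent through that client):

// Tag every event and trace sent through this client with the installation name.
// customerName is an assumption: read it from your configuration.
var client = new TelemetryClient();
client.Context.Properties["Customer"] = customerName;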

With a few lines of code we now have a centralized place where we can check the security alerts of all our installations.

Gian Maria Ricci

How to securely expose my test TFS on the internet

I’m not a security expert, but I have basic knowledge of the subject, so when it came time to expose my test TFS to the outside world I took some precautions. First of all, this is a test TFS instance running in my test network; it is not a production instance and I only need to access it occasionally, when I’m outside my network.

Instead of mapping port 8080 on my firewall, I deployed a Linux machine, enabled SSH and added Google two-factor authentication, then exposed port 22 on another external port. Thanks to this, the only port exposed on my router is one that remaps to port 22 on my Linux instance.

Now, when I’m on an external network, I use PuTTY to connect over SSH to that machine, and I set up tunneling as shown in Figure 1.


Figure 1: Tunneling to access my TFS machine

Tunneling remaps port 8080 of the 10.0.0.116 machine (my local TFS) to port 8080 on my local machine. Now, from a machine external to my network, I can log in to that Linux machine.
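For reference, this is the equivalent plain ssh command (a sketch: the external port 2222 and the router address are placeholders, since the real values depend on your router configuration):

# Forward local port 8080 through the Raspberry Pi to port 8080 of the TFS machine.
ssh -p 2222 -L 8080:10.0.0.116:8080 pi@your-router-external-address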


Figure 2: Login screen with verification code.

This is a Raspberry Pi running Linux: I simply use pi as the username, then enter the verification code from my cellphone (Google Authenticator app) and finally the password of my account.

Once I’m connected to the Raspberry Pi I can simply browse http://localhost:8080 and everything is redirected through a secure SSH tunnel to the 10.0.0.116 machine. Et voilà: I can access any machine, on any port of my network, just using SSH tunneling.

Figure 3: My local TFS instance, now accessible from an external machine.

This is surely not a tutorial on how to expose a production TFS instance (please use HTTPS); it is instead a simple tutorial on how you can access every machine in your local lab without directly exposing its ports on your home router. If you are a security expert you will probably find flaws in this approach, but it is surely better than directly mapping ports on the router.

Gian Maria.

Using PAT to authenticate your tools

One of the strong points of VSTS / TFS is its extensibility through APIs, and now that we have a really nice set of REST APIs, it is quite normal to write little tools that interact with your VSTS / TFS instances.

Whenever you write tools that interact with VSTS / TFS you need to decide how to authenticate to the server. For TFS this is quite simple, because you can run the tool as an Active Directory user and use AD integration; in VSTS, integrating with your AD requires more work and is not always a feasible solution.

Actually, the best alternative is to use Personal Access Tokens (PATs) to access your server, even if you are using TFS and could use AD authentication.
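As a quick illustration, here is a minimal sketch of calling the REST API with a PAT from C#; the account URL is a placeholder. The PAT is sent as the password of a standard Basic authentication header, with an empty username:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class PatExample
{
    static void Main()
    {
        // Never hardcode tokens: read the PAT from a secure location.
        string pat = Environment.GetEnvironmentVariable("VSTS_PAT");

        using (var client = new HttpClient())
        {
            // Basic authentication: empty username, PAT as password.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic",
                Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            var response = client.GetAsync(
                "https://youraccount.visualstudio.com/DefaultCollection/_apis/projects?api-version=1.0")
                .Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}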

PAT acts on behalf of a real user

You can generate a Personal Access Token from the security section of your user profile, and this immediately gives you the idea that the token is related to a specific account.


Figure 1: Accessing security information for your profile

From the Personal access tokens section of your profile you can generate tokens to access the server on behalf of your user. This means that the token cannot have more rights than your user has. This is interesting because if you revoke access to a user, all PATs related to that user are automatically disabled; also, whatever restriction you assign to the user (e.g. deny access to some code path) is inherently applied to the token.

PAT expires in time

You can see from point 1 of Figure 2 that the PAT has an expiration (the maximum value is 1 year), which means you run no risk of forgetting some tool authenticated somewhere for years.


Figure 2: PAT Creation page in VSTS

A typical security problem happens when you create a user in your TFS / VSTS to run tools, such as TFSTool or similar. You then use that user in every tool that needs unattended access to your TFS instance, and after some years you have no idea how many deployed tools have access to your server.

Thanks to PATs you can create a different PAT for each tool that needs to authenticate unattended to your server; after one year at most the tool will lose authentication and will need a new token. This automatically prevents the risk of old tools still having access to your data years later, even if they are not actively used anymore.

For VSTS (point 2) you should also specify the account the PAT is able to access, if your user has rights to access more than one account.

PAT Scope can be reduced

In Figure 2, point 3 highlights that you can restrict the permissions of a PAT based on TFS / VSTS areas. If your tool needs to manipulate work items and does not need to access code or other areas of TFS, it is a best practice to create the token with access only to Work Items. This means that, even if the user can read and write code, the token can access only Work Items.

Another really important aspect is that many areas have the option of read-only access. As an example, if your tool only needs to access Work Items to create some reports, you can give the PAT only Work Item (read) access, so the tool will be able to access Work Items only in a read-only way.

The ability to reduce the surface of data that can be accessed by a PAT is probably the number one reason to use PATs instead of AD authentication for on-premises TFS.

PAT can be revoked

Any PAT can be revoked at any time with a single click. This means that if you use the pattern of one PAT per tool, you can selectively revoke authentication for any tool by revoking the associated PAT. This capability is really interesting for on-premises TFS, because without PATs, selectively revoking access to a specific tool requires using a different user for each tool and disabling that specific user.

Conclusion

Using PATs is not only useful for tokens used by tools that need unattended authentication to the server; you can use a PAT even for tools that you use interactively, if you want to be sure that the tool will not have access to certain parts of your account (e.g. a PAT that can only access code, to use with Git tools), or if the tool does not support MSA or AAD authentication.

How to add a user to Project Collection Service Accounts in TFS / VSO

VSO and TFS have a special group called Project Collection Service Accounts that has really powerful permissions, and usually no user should be part of that group. There are specific circumstances, like running the TFS Integration Platform to move code to TFS, where the account used to access VSO needs to be part of this group to temporarily gain special permissions.

Sadly enough, the UI does not allow you to directly add a user to that group: the add button is disabled when you select it.

Figure 1: You cannot add users or groups to the Project Collection Service Accounts group directly from the UI.

The reason behind this decision is security: adding a user to this group is not an everyday operation, users in that group have really powerful permissions, and you should add users to Service Accounts only in really specific situations and only when really required. This is why you need to resort to the command line.

tfssecurity 
	/g+ 
	"Project Collection Service Accounts" 
	alkampfer@nablasoft.com 
	/collection:https://gianmariaricci.visualstudio.com/DefaultCollection

The TfsSecurity.exe command-line utility can add any user to any group, bypassing the limitation in the UI. Remember to remove the user from that group as soon as he no longer needs the special permissions; the command line is the same as the previous one, just change /g+ to /g-, as shown below.
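For convenience, the full removal command:

tfssecurity 
	/g- 
	"Project Collection Service Accounts" 
	alkampfer@nablasoft.com 
	/collection:https://gianmariaricci.visualstudio.com/DefaultCollection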

As a rule of thumb, users should be added to the Service Accounts group only if strictly required, and removed from that group immediately after the specific need ceases to exist.

In older versions of VSO / TFS you could obtain the same result from the UI, without the command line: you just selected the user you wanted to add to the Service Accounts group, went to the member of section and, pressing the plus button, added the user to the group. This is disabled in the current version.

Figure 2: You can no longer add a user directly to a group.

If you really want to avoid the command line, you can still use the UI: just create a standard TFS group and then add that group to Project Collection Service Accounts. First step: create a group with a really explicit name.

Figure 3: This group has a specific name that immediately tells the reader that it is a special group.

Once the group is created, you can simply add it to the Project Collection Service Accounts group with a few clicks.

Figure 4: Adding the new group to the Project Collection Service Accounts group.

Now you can simply add and remove users from the “WARN – Service Account Users” group in the UI whenever you need to grant or revoke Service Account permissions.

Gian Maria Ricci.

Make it easy to store secure passwords in TFS Build with DPAPI

I’ve blogged some days ago about securing the password in a build definition, and I want to make a disclaimer on this subject. The technique described in that article permits you to use an encrypted password in a build definition, but the password is protected only from people who have no access to the build machine. If you are a malicious user and you can schedule a build, you can simply schedule a new build that launches a custom script that decrypts the password and sends the clear password by email or dumps it to the build output.

The previous technique is based on encryption with DPAPI: the encrypted password can be decrypted only by the TfsBuild user and only on the machine used to generate it (the build machine). Whatever technique you use to encrypt the password, the build process must be able to decrypt it, so it is always possible for another user to schedule a build running a script that decrypts the password.

Every user that knows the TfsBuild user password can also use Remote Desktop or PowerShell Remoting to reach the build machine and decrypt the password there. This means the technique described is not 100% secure, and you should be aware of its limitations.

Apart from these considerations about the real security of this technique, one of the drawbacks of using DPAPI is that you need to do some PowerShell scripting on the remote machine to encrypt the password, so you need to Remote Desktop to the build machine or open a remote PowerShell session. A better solution is creating a super simple ASP.NET site that encrypts the password through a simple HTML page, and deploying that site on the build server.

The purpose is having a simple page, running on the build server with the credentials of TfsBuild, that simply encrypts a password using DPAPI.


Figure 1: Simple page to encrypt a string.

You can test this technique locally by running the site on localhost under the credentials of the logged-in user, encrypting a password and then trying to decrypt it in PowerShell.


Figure 2: Decrypting a password encrypted with the helper site should work correctly.
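The decryption side is the standard DPAPI Unprotect call; here is a sketch in C# for clarity (in the build it is typically invoked from PowerShell through the same ProtectedData class). It must run as the same user (TfsBuild) on the same machine that encrypted the value:

// Requires System.Security.Cryptography and System.Text.
public static string Unprotect(string hex)
{
    // Convert the dash-less hex string produced by the page back to bytes.
    byte[] encrypted = new byte[hex.Length / 2];
    for (int i = 0; i < encrypted.Length; i++)
    {
        encrypted[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    }

    byte[] clearBytes = ProtectedData.Unprotect(encrypted, null, DataProtectionScope.CurrentUser);
    return Encoding.Unicode.GetString(clearBytes);
}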

The code of this page is really simple; here is the controller.

// Requires: using System; using System.Security.Cryptography; using System.Text;
// using System.Web.Mvc; (ProtectedData lives in the System.Security assembly).
[HttpPost]
public ActionResult Index(String pwd)
{
    // Encrypt the posted string with DPAPI and show it as a dash-less hex string.
    var pbytes = Protect(Encoding.Unicode.GetBytes(pwd));
    ViewBag.Encrypted = BitConverter.ToString(pbytes).Replace("-", "");
    return View();
}

public static byte[] Protect(byte[] data)
{
    try
    {
        // Encrypt the data using DataProtectionScope.CurrentUser. The result can be
        // decrypted only by the same user on the same machine, i.e. TfsBuild on the
        // build server.
        return ProtectedData.Protect(data, null, DataProtectionScope.CurrentUser);
    }
    catch (CryptographicException e)
    {
        Console.WriteLine("Data was not encrypted. An error occurred.");
        Console.WriteLine(e.ToString());
        return null;
    }
}

And the related view.

@{
    ViewBag.Title = "Index";
}

<h2>Simple Powershell Encryptor utils</h2>
<form method="post">

    Insert your string <input type="password" name="pwd" />
    <br />
    <input type="submit" value="Encrypt" />
    <br />
    <textarea cols="80" rows="10">@ViewBag.Encrypted</textarea>

</form>

Thanks to this simple site, encrypting the password is much simpler than using PowerShell directly, and you do not need to Remote Desktop to the build machine. For slightly better security you can disable Remote Desktop and remote PowerShell on the build machine, so no one will be able to directly use PowerShell to decrypt the password, even if they know the password of the TfsBuild user.


Gian Maria.