Using a VMware machine when you have Hyper-V

There are lots of VMs containing demos, labs, etc. around the internet, and Hyper-V is rarely the target virtualization system. This is because it is present on desktop OSes only since Windows 8, it is not free (it ships only with the Professional editions of Windows) and it is bound to Windows. If you have to create a VM to share on the internet, 99% of the time you want to target VMware or VirtualBox with a Linux guest system (no license needed). Since VirtualBox can run VMware machines with little problem, VMware is the de-facto standard in this area.

Virtual machines with demos, labs, etc. that you find on the internet are 99% of the time targeted at the VMware platform.

In the past I struggled a lot with conversion tools that convert VMware disk formats to the Hyper-V format, but sometimes this does not work, because the virtualized hardware of the two systems is really different.

If you really want to be productive, the only solution I've found is installing an ESXi server on an old machine, an approach that has given me lots of satisfaction. First of all, you can use VMware's standalone conversion tool to convert a VMware VM to the standard OVF format in a few minutes, then upload the image to your ESXi server and you are ready to go.

image

Figure 1: A simple command line instruction converts the VM into OVF format
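The exact command is not readable from the image; if the tool in question is VMware's ovftool (an assumption on my part), the conversion is a one-liner along these lines, with purely illustrative paths:

```
rem Convert a local VMware machine (.vmx) into an OVF package (paths are illustrative)
ovftool "C:\VMs\DemoLab\DemoLab.vmx" "C:\Export\DemoLab\DemoLab.ovf"
```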

image

Figure 2: From the ESXi interface you can choose to create a new VM from an OVF file

Once you choose the OVF file and the disk file, you just need to specify some basic characteristics for the VM and then you can simply let the browser do the rest; your machine will be created on your ESXi node.

image

Figure 3: Your VM will be created directly from your browser.

The second advantage of ESXi is that it is a really mature and powerful virtualization system, available for free. The only drawback is that it needs a serious network card; it will not work with a cheap card integrated into a consumer motherboard. For my ESXi test instance I used my old i7-2600K with a standard Asus P8P67 motherboard (overclocked), and then I spent a few bucks (approximately 50€) to buy a used 4x Gigabit network card. This gives me four independent NICs with a decent network chip, each one running at 1 Gbit. Used cards are really cheap, especially because there are often no drivers for the latest operating systems, so they are sold on eBay for a few bucks. When you use a virtual machine to test something that involves networking, you will thank ESXi and a decent multi-NIC card, because you can create real network topologies, like having three machines each using a different NIC, potentially connected to different routers / switches to test a real production scenario.

ESXi NIC virtualization is FAR more powerful than VirtualBox or even VMware Workstation when installed with a really powerful NIC. Combined with a multi-NIC card, it gives you the ability to simulate real network topologies.

If you are using Linux machines, the VMware environment has another great advantage over Hyper-V: it supports all resolutions. You are not limited to Full HD with manual editing of the grub configuration; you can change the resolution from the Linux control panel, or directly enable live resizing with the Remote Console available in ESXi.
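For comparison, on Hyper-V the usual workaround for a Linux guest is to hard-code the resolution on the kernel command line; a sketch of that edit on a Debian/Ubuntu guest (the resolution value is just an example):

```
# /etc/default/grub on a Hyper-V Linux guest (illustrative resolution)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1920x1080"
# then regenerate the grub configuration and reboot
sudo update-grub
```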

If you really want to create a test lab, especially if you want to do security testing, having one or more ESXi hosts is something that pays off a lot in the long run.

Gian Maria

The Dreadful IIS Loopback Check

This is something that bites me from time to time, both as a TFS consultant and when I'm developing code. The problem is the following: you have a site hosted in IIS on the computer you are logged in to, the site uses Windows authentication, but you cannot log in using a FQDN, only with localhost.

This is a security feature, because it avoids a reflection attack if the machine gets compromised. Sometimes it is annoying while you develop, because you usually use your local IIS to host the site you are working on, accessing it with localhost; then it is necessary to verify that everything works with real site names. For this reason I usually modify my hosts file to create an alias like www.myproduct.local that points to 127.0.0.1, and here come the problems.
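The hosts entry itself is trivial (www.myproduct.local is the example name used in this post):

```
# %SystemRoot%\System32\drivers\etc\hosts
127.0.0.1    www.myproduct.local
```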

If you use Forms authentication in ASP.NET you are ready to go, but if you enable Windows authentication, the symptom is that your browser keeps asking for a password, because you always get a 401 response.

A typical symptom of the loopback check is a site that does not accept Windows authentication when accessed with a FQDN, but works perfectly using localhost.

If you legitimately want www.myproduct.local to point to localhost, and you want to use your NTLM/Kerberos credentials, you can follow the instructions at this link. I really like the answer in that link, because I've found many other places that suggest disabling the loopback check entirely (a wrong choice from a security point of view). That link points you to the right solution: specifying only the FQDNs that you want to exclude from the loopback check. In my situation I can exclude www.myproduct.local while maintaining the security check for everything else.
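Assuming the link describes the standard BackConnectionHostNames approach, a PowerShell sketch of the fix looks something like this (run elevated, then restart IIS):

```powershell
# Exclude only specific hostnames from the loopback check
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" `
    -Name "BackConnectionHostNames" `
    -PropertyType MultiString `
    -Value "www.myproduct.local"
```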

If you have problems accessing a TFS instance from the server where the application tier is installed, do not disable the loopback check: browse from another computer, or exclude only the real FQDN from the check.

Pretty please, resist the urge to disable security features, especially on your production Team Foundation Server instance. Avoid accessing the web interface from the AT, or disable the loopback check only for the real FQDN, but avoid turning off a security feature (like the loopback check) entirely on your production server.

Gian Maria.

Load Security Alerts in Azure Application Insights

Security is one of the big problems of the modern web: business has moved to web applications and security has become an important part of application development. One side of the problem is adopting standard, proven procedures to avoid the risk of basic attacks like SQL or NoSQL injection, but a big part of security is programmed at the application level.

If you are building a SPA with a client framework like Angular and your business logic is exposed through an API (e.g. ASP.NET Web API), you cannot trust the client, thus you need to validate every server call for authorization and authentication.

When an API layer is exposed by a web application, it is one of the preferred attack surfaces for your application. Never, ever trust your UI logic to protect you from malicious calls.

One of the most typical attacks is forging calls to your API layer, a task that can be automated with tools like Burp Suite or WAPT, and it is a major source of problems. As an example, if you have some administrative page where you grant or deny claims to users, it is imperative that every call is accepted only if the authenticated user is an administrator of the system. Application logic can become complicated: as an example, you may have access to a document because it is linked to an offer made in France and you are the Area Manager for France. The more complex the security logic, the more you should test it, but you probably cannot cover 100% of the situations.

In such a scenario it is imperative that every unauthorized access is logged and visible to administrators, because if you see a spike in forged requests you should immediately investigate: your system is probably under attack.

Proactive security is a better approach: you should be warned immediately if something suspicious is happening in your application.

Once you determine that there was a problem in your app, you can raise a SecurityAlertLog, but then the question is: where should this log be stored? How can I visualize all the SecurityAlerts generated by the system? One possible solution is using Azure Application Insights to collect all the alerts and use it to monitor the trend and status of security logs.

As an example, in our web application, when the application logic determines that a request has a security problem (e.g. a user trying to access a resource he has no access to), a typical HTTP Response Code 403 (Forbidden) is returned. The Angular application usually prevents such invalid calls, so whenever we find a 403 it could be a UI bug (the Angular application incorrectly requested a resource the current user has no access to) or it could be some malicious attempt to access a resource. Both situations are quite problematic from a security perspective, so we want to be able to log them separately from the application log.

Security problems should not simply be logged with the standard log infrastructure; they should generate a more visible and centralized alert.

With Web API a possible solution is creating an ActionFilterAttribute capable of intercepting every Web API call; it depends on a list of ISecurityAlertLogService, a custom interface that represents a component capable of logging a security alert. With such a configuration we can let our Inversion of Control mechanism scan all the implementations of ISecurityAlertLogService available to the system. As an example, we log security exceptions in a specialized collection in a Mongo database and on Azure Application Insights.
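The post does not show the interface itself; a minimal sketch, with purely illustrative member names, could be:

```csharp
// Hypothetical sketch: the real classes used in the post may differ.
public class SecurityAlertMessage
{
    public string Type { get; set; }        // area of the problem, e.g. "ApiCall" or "CommandCall"
    public string User { get; set; }
    public string IpAddress { get; set; }
    public string Description { get; set; }
}

public interface ISecurityAlertLogService
{
    void LogAlert(SecurityAlertMessage alert);
}
```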

image

Figure 1: Declaration of a Web API filter capable of intercepting every call and sending security logs to a list of providers.

An ActionFilterAttribute is a simple class capable of inspecting every Web API call. In this scenario I'm interested in inspecting what happens when the request ends.

image

Figure 2: The filter inspects each call upon completion and can verify if the return value is 403.

Thanks to the filter we can simply verify if the return code is 403, or if the call generated an exception and that exception is a SecurityException; if the check is positive, someone is tampering with requests or the UI has a security bug, so a SecurityLogAlert is generated and passed to every ISecurityAlertLogService.
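A minimal sketch of such a filter, assuming the interface above and standard ASP.NET Web API types (the real code is the one shown in the figures), could look like this:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Security;
using System.Web.Http.Filters;

// Hypothetical sketch: registered as a global filter (e.g. config.Filters.Add(...))
// so that the IoC container can supply the list of log services.
public class SecurityAlertFilterAttribute : ActionFilterAttribute
{
    private readonly IEnumerable<ISecurityAlertLogService> _alertLogServices;

    public SecurityAlertFilterAttribute(IEnumerable<ISecurityAlertLogService> alertLogServices)
    {
        _alertLogServices = alertLogServices;
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        bool forbidden = actionExecutedContext.Response != null
            && actionExecutedContext.Response.StatusCode == HttpStatusCode.Forbidden;
        bool securityException = actionExecutedContext.Exception is SecurityException;

        if (forbidden || securityException)
        {
            // Someone tampered with the request or the UI has a security bug: raise the alert.
            var alert = new SecurityAlertMessage
            {
                Type = "ApiCall",
                User = actionExecutedContext.ActionContext.RequestContext.Principal?.Identity?.Name,
                Description = actionExecutedContext.Request.RequestUri.ToString(),
            };

            foreach (var service in _alertLogServices)
            {
                service.LogAlert(alert);
            }
        }

        base.OnActionExecuted(actionExecutedContext);
    }
}
```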

Thanks to Azure Application Insights we can track the alert with very few lines of code.

image

Figure 3: Tracking to Application Insights really takes only a few lines of code.

We are actually issuing two distinct telemetry events. The first one is a custom event and it contains very little data: most important is the name, equal to the string SecurityProblem concatenated with the Type property of our AlertMessage. That property contains the area of the problem, like ApiCall or CommandCall, etc. We also add three custom properties. Of the three, the “Type” property is important because it helps us separate SecurityAlertMessage events from all the other events generated by the system; user and IP address can be used to further filter security alerts. The reason why we use so little information is that an event is a telemetry object that cannot contain much data; it is meant to simply track that something happened in the system.

Then we add a Trace telemetry event, because a trace can contain much more information; basically we trace the same information as the event, but the message is the serialization of the entire AlertMessage object, which can contain lots of information and thus is not suited for a custom event (if the custom event is too big it will be discarded).
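A sketch of the tracking code, assuming a TelemetryClient instance and Json.NET for serialization (the actual code is the one in Figure 3), might be:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Newtonsoft.Json;

// Hypothetical sketch of the two telemetry calls described above.
public class ApplicationInsightsSecurityAlertLogService : ISecurityAlertLogService
{
    private readonly TelemetryClient _telemetryClient = new TelemetryClient();

    public void LogAlert(SecurityAlertMessage alert)
    {
        var properties = new Dictionary<string, string>
        {
            ["Type"] = "SecurityAlertMessage", // separates security telemetry from everything else
            ["User"] = alert.User,
            ["IpAddress"] = alert.IpAddress,
        };

        // Small custom event: name plus the three custom properties, nothing more.
        _telemetryClient.TrackEvent("SecurityProblem" + alert.Type, properties);

        // Richer trace: same properties, full serialization of the alert as the message.
        _telemetryClient.TrackTrace(JsonConvert.SerializeObject(alert), properties);
    }
}
```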

With this technique we are sure that no SecurityAlert custom event will be discarded due to length (we track minimal information), but we also have the full information thanks to TrackTrace. If a trace is too big to be logged we will miss it, but we will never miss an event.

image

Figure 4: Telemetry information as seen from the Visual Studio Application Insights query window.

As you can see, I can filter for Type : SecurityAlertMessage to inspect only the events and traces related to security; I have my events and I can immediately see the user that generated them.

Centralizing the logs of multiple installations is a key point in security: the more log sources you need to monitor, the higher the probability that you miss some important information.

Another interesting aspect is that when the software initializes the TelemetryClient it adds the name of the customer/installation, so we can have a single place where the logs of every customer are saved. From the telemetry we can filter by customer or immediately understand where the log was generated.
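With the classic Application Insights SDK this can be achieved by adding a context property when the client is initialized; a sketch (the value is illustrative):

```csharp
// Hypothetical sketch: stamp every telemetry item sent by this client with the installation name.
var telemetryClient = new TelemetryClient();
telemetryClient.Context.Properties["Customer"] = "ContosoInstallation";
```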

image

Figure 5: The Customer property allows us to send telemetry events from multiple installations to a single Application Insights instance.

With a few lines of code we now have a centralized place where we can check the security alerts of all our installations.

Gian Maria Ricci

How to securely expose my test TFS on the internet

I'm not a security expert, but I have basic knowledge of the subject, so when it was time to expose my test TFS to the outside world I took some precautions. First of all, this is a test TFS instance running in my test network; it is not a production instance and I only need to access it occasionally, when I'm outside my network.

Instead of mapping port 8080 on my firewall, I deployed a Linux machine, enabled SSH and added Google two-factor authentication, then I exposed port 22 on another external port. Thanks to this, the only port exposed on my router is a port that remaps to port 22 of my Linux instance.
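For reference, on a Debian-based distribution the two-factor part is usually handled by the google-authenticator PAM module; a rough sketch of the setup (exact steps may vary by distribution):

```
# install the PAM module and generate the secret for your user
sudo apt-get install libpam-google-authenticator
google-authenticator

# /etc/pam.d/sshd - ask for the verification code at login
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config - enable challenge-response authentication, then restart sshd
ChallengeResponseAuthentication yes
```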

Now, when I'm on an external network, I use PuTTY to connect via SSH to that machine, and I set up tunneling as in Figure 1.

image

Figure 1: Tunneling to access my TFS machine

Tunneling allows me to remap port 8080 of the 10.0.0.116 machine (my local TFS) to port 8080 on my local machine. Now, from a machine external to my network, I can log in to that Linux machine.
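The same tunnel can be created with a plain OpenSSH client instead of PuTTY; a sketch, where the external port and host name are illustrative:

```
# forward local port 8080 to port 8080 of the internal TFS machine (10.0.0.116),
# going through the SSH server exposed on a non-standard external port
ssh -p 2222 -L 8080:10.0.0.116:8080 pi@my-public-hostname
```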

image

Figure 2: Login screen with verification code.

This is a Raspberry Pi running Linux; I simply use pi as the username, then the verification code from my cellphone (Google Authenticator app) and finally the password of my account.

Once I'm connected to the Raspberry machine I can simply browse http://localhost:8080 and everything is redirected through a secure SSH tunnel to the 10.0.0.116 machine. Et voilà, I can access any machine and any port in my network just using SSH tunneling.

image

Figure 3: My local TFS instance is now accessible from an external machine

This is surely not a tutorial on how to expose a production TFS instance (please use HTTPS); it is instead a simple tutorial on how you can access every machine in your local lab without exposing its port directly on your home router. If you are a security expert you will probably find flaws in this approach, but it is surely better than directly mapping ports on the router.

Gian Maria.

Using PAT to authenticate your tools

One of the strong points of VSTS / TFS is extensibility through APIs, and now that we have a really nice set of REST APIs, it is quite normal to write little tools that interact with your VSTS / TFS instances.

Whenever you write tools that interact with VSTS / TFS you need to decide how to authenticate to the server. For TFS it is quite simple, because you can run the tool as an Active Directory user and use AD integration; in VSTS, integrating with your AD requires more work and is not always a feasible solution.

Actually, the best alternative is to use Personal Access Tokens to access your server, even if you are using TFS and could use AD authentication.
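Before looking at the properties of a PAT, here is a rough idea of how a tool uses one: the REST APIs accept the PAT as the password of a Basic authentication header with an empty user name. A minimal C# sketch, where the account name and the API call are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class PatSample
{
    static void Main()
    {
        const string pat = "<your personal access token>";

        using (var client = new HttpClient())
        {
            // The PAT goes in a Basic authentication header, with an empty user name.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic",
                Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            // Placeholder account and call: list the team projects of the collection.
            string json = client
                .GetStringAsync("https://myaccount.visualstudio.com/DefaultCollection/_apis/projects?api-version=2.0")
                .Result;
            Console.WriteLine(json);
        }
    }
}
```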

PAT acts on behalf of a real user

You can generate a Personal Access Token from the security section of your user profile, and this immediately gives you the idea that the token is related to a specific account.

image

Figure 1: Accessing security information for your profile

From the Personal access tokens section of your profile you can generate tokens to access the server on behalf of your user. This means that the token cannot have more rights than your user has. This is interesting because if you revoke access to a user, all PATs related to that user are automatically disabled; also, whatever restriction you assign to the user (e.g. deny access to some code path) is inherently applied to the token.

PATs expire over time

You can see from point 1 of Figure 2 that a PAT has an expiration (maximum value is 1 year), which implies you do not risk forgetting some tool authenticated somewhere for years.

This image shows how to create a PAT, and points out that the token expires, is bound to a specific account, and that you can restrict the permissions of the PAT to given areas.

Figure 2: PAT Creation page in VSTS

A typical security problem happens when you create in your TFS / VSTS a user dedicated to running tools, such as TFSTool or similar ones. Then you use that user in every tool that needs unattended access to your TFS instance, and after some years you have no idea how many deployed tools have access to your server.

Thanks to PATs you can create a different PAT for each tool that needs to authenticate unattended to your server; after one year at most the tool will lose authentication and will need a new token. This automatically prevents the risk of old tools still having access to your data after years, even if they are not actively used anymore.

For VSTS (point 2) you should also specify the account the PAT is able to access, if your user has rights to access more than one account.

PAT Scope can be reduced

In Figure 2, point 3 highlights that you can restrict the permissions of the PAT based on TFS / VSTS areas. If your tool needs to manipulate work items and does not need to access code or other areas of TFS, it is a best practice to create the token giving it access only to Work Items. This means that, even if the user can read and write code, the token has access only to Work Items.

Another really important aspect is that many areas have the option to grant access in read-only mode. As an example, if your tool only needs to read Work Items to create some reports, you can give the PAT only Work Item (read) access, so the tool will be able to access Work Items only in a read-only way.

The ability to reduce the surface of data that can be accessed by a PAT is probably the number one reason to use PATs instead of AD authentication for on-premises TFS.

PAT can be revoked

Any PAT can be revoked at any time with a single click. This means that if you use the pattern of one PAT per tool, you can selectively revoke authentication to any tool by revoking the associated PAT. This capability is really interesting for on-premises TFS, because without PATs, if you want to selectively revoke access to a specific tool, you need to use a different user for each tool and disable that specific user.

Conclusion

Using PATs is not only useful for tools that need unattended authentication to the server; you can use a PAT even for tools you use interactively, if you want to be sure that the tool will not have access to certain parts of your account (e.g. a PAT that can only access code, to use with Git tools), or if the tool does not support MSA or AAD authentication.