Sometimes, even if you are logged in as a domain user that has all the rights to access TFS, you are prompted for your password every time you navigate to TFS. You simply re-enter your credentials and you are in, but each time you close and reopen the browser you need to type them again. This happens because the browser does not recognize the url of TFS as belonging to the Intranet zone, so it does not send your AD credentials for authentication. Before resorting to manually handling authentication with Credential Manager on each client computer, consider fixing this once and for all with Group Policy.

If you look at the Internet security settings of Internet Explorer, the default configuration enables automatic logon only in the Intranet zone. If the url of your TFS is not recognized as belonging to the Local Intranet zone, credentials are not sent and you are prompted for a password.


Figure 1: Intranet zone is allowed for automatic logon.

A simple solution is manually adding the TFS url to the list of Intranet Sites, but this is a manual operation that must be repeated on each computer. It is far better to propagate the list of urls belonging to the Intranet zone through Active Directory with Group Policies. This permits you to specify all the urls that should be considered Intranet (and/or Trusted) in a central place and have the setting propagated by applying the policy to the right Organizational Unit or to the entire domain.
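If you first want to verify the behavior on a single machine (or you cannot touch Group Policy at all), the zone assignment is just a value under the Internet Settings ZoneMap registry key, so you can set it by hand. Here is a minimal PowerShell sketch: the TFS host name is only an example and the value 1 stands for the Local Intranet zone (2 would be Trusted Sites).

# Minimal sketch: map http://tfs.cyberpunk.local to the Local Intranet zone for the current user.
# The host name is an example; adjust it to your TFS url. Zone values: 1 = Intranet, 2 = Trusted Sites, 3 = Internet.
$domainKey = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\cyberpunk.local\tfs"
New-Item -Path $domainKey -Force | Out-Null
New-ItemProperty -Path $domainKey -Name "http" -Value 1 -PropertyType DWord -Force | Out-Null

The Site to Zone Assignment List policy achieves the same mapping, but it is maintained centrally and cannot be changed by the user.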


Figure 2: Set list of intranet sites through AD Policies

The exact path of this setting is represented in Figure 3:


Figure 3: Path of the setting where you find the “Site to Zone Assignment List”

The drawback of this approach is that people are no longer able to change the list of sites themselves, because it is now managed by the policy, which overrides local settings. But the net effect is that every computer the policy is applied to can access TFS without re-entering the password each time.


Figure 4: On computers belonging to the domain, the list of sites belonging to each zone is now managed by AD Policy.

Another option is the policy setting “Turn on automatic detection of intranet”, which lets each computer guess whether a site belongs to the intranet. This setting usually works well and is less invasive for the user, but if it does not work, specifying the exact list is the best option you have.


Figure 5: Automatic detection of zone in Intranet area


Figure 6: Automatic detection applied to a client computer.

Gian Maria.


Some days ago I had a tweet exchange with Giulio about a post by Gordon on storing security info in a TFS Build Definition. The question is: how can I store a password in a build definition without people being able to view it simply by editing the build definition itself?

TFS 2013 includes a nice new Build template that allows customization with scripts, and this is my preferred build customization scenario. So I asked myself: how can I pass a password to a script in a build definition in a secure way? When you are on Active Directory, the best solution is using AD authentication. My build server runs with the credentials of the user cyberpunk\TfsBuild, where cyberpunk is the name of my domain, so the build is executed with those credentials. Any software that supports AD authentication can simply grant rights to the TfsBuild user and there is no need to specify a password in the build definition.

As an example, if you want to use Web Deploy to deploy a site during a build, you can avoid storing the password in clear text simply by using AD authentication. I’ve described this scenario in the post on how to Deploy from a TFS Build to a Web Site without specifying a password in the build definition.

But sometimes you have services or tools that do not support AD authentication. This is my scenario: I need to call an external service that takes username and password in the querystring; credentials are validated against a custom database, so AD authentication cannot be used. I’ve set up a simple web service that asks for a username and a password and returns a json that simply dumps the parameters. This web service stands in for my external service that needs to be invoked from a script during the build.


Figure 1: Simple service that needs username and password without supporting AD Authentication.

As you can see the call does nothing except return username and password, to verify that the script was really called with the right parameters. Here is a simple script that calls this service; it can easily be invoked during a TFS Build and this is my preferred way to customize TFS 2013 builds.

Param
(
    [string] $url = "http://localhost:2098/MyService/Index",
    [string] $username = "",
    [string] $password = ""
)

Write-Host "Invoking-Service"

# Call the service passing username and password in the querystring
$retValue = Invoke-RestMethod "$($url)?username=$username&password=$password"

Write-Host "ReturnValueIs: "$retValue.Message

Once I’ve checked in this script in source control, invoking it in a TFS Build is a breeze; here is how I configured the build to invoke the service after the source code is built.


Figure 2: Invoke script but password is in clear text.

This works perfectly: you can verify in the build diagnostics that the web site was correctly called with the right username and password (Figure 3). But as you can see in Figure 2 the password is in clear text, so everyone that has access to the build definition now knows it. This cannot be accepted in some organizations, so I need a way to avoid specifying the password in clear text.


Figure 3: Web site was called with the right password.

My problem is: how can I pass a password to the script in a secure way?

Luckily enough, Windows implements a set of secure APIs called DPAPI (Data Protection API) that allow you to encrypt and decrypt data using user- and machine-specific information. This means that a string encrypted by a user on a machine can be decrypted only by that same user on the same machine, not by other users.

Thanks to DPAPI we can encrypt the password as the Cyberpunk\TfsBuild user on the build machine, then put the encrypted string in the build definition. Anyone who looks at the build definition sees only the encrypted password and cannot decrypt it without knowing the credentials of Cyberpunk\TfsBuild and running the script on the same Build machine. The build agent, on the other hand, can decrypt it because it runs as Cyberpunk\TfsBuild on that very machine.

So I connected in Remote Desktop to the Build machine, opened a PowerShell console using the credentials of the Cyberpunk\TfsBuild user, and encrypted the password with the following code. For this second example the password is MyPassword, to distinguish it from the previous example.

PS C:\Users\Administrator.CYBERPUNK> $secureString = ConvertTo-SecureString -String "MyPassword" -AsPlainText -Force
PS C:\Users\Administrator.CYBERPUNK> $encryptedSecureString = ConvertFrom-SecureString -SecureString $secureString
PS C:\Users\Administrator.CYBERPUNK> Write-Host $encryptedSecureString
01000000d08c9ddf0115d1118c7a00c04fc297eb010000007b3f6d7796acef42b98128ebced174280000000002000000000003660000c00000001000
0000dee3359600e9bfb9649e94f3cfe7b24f0000000004800000a000000010000000e12de6a220f9a542655d75356be128511800000012a173b8fe8b
09244f7050da6784289a308ce6888ace493614000000e3dcb31c16ac3ff994d50dac600ed766d746e901

The encrypted password is the long string you see in the above output and it can be used in the build definition instead of the clear-text password.
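If you prefer not to type the password in clear text at the console (it would end up in the command history), you can let PowerShell prompt for it with masked input; this is just a small variation on the commands above.

# Prompt for the password with masked input, then produce the DPAPI-encrypted string
Read-Host -Prompt "Password to encrypt" -AsSecureString | ConvertFrom-SecureString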


Figure 4: Password is now encrypted in the build definition

This password can be decrypted only by users who know the password of the TfsBuild user and can open a session on the Build machine. The main drawback of this technique is that the person who creates the build (and knows the password for the external service) must also know the password of the TfsBuild user and have access to the Build machine to encrypt it. This problem will be addressed in a future post; for now I’m happy enough not to have a clear-text password in the build definition.

Clearly the script that invokes the service must be modified to take encryption into account:

Param
(
    [string] $url = "http://localhost:2098/MyService/Index",
    [string] $username = "",
    [string] $password = ""
)

Write-Host $password

# The $password parameter contains the encrypted standard string produced by ConvertFrom-SecureString,
# so this call succeeds only for the same user on the same machine that encrypted it
$secureStringRecreated = ConvertTo-SecureString -String $password

# Use a throw-away PSCredential object to get the clear text back from the SecureString
$cred = New-Object System.Management.Automation.PSCredential('UserName', $secureStringRecreated)
$plainText = $cred.GetNetworkCredential().Password

Write-Host "Invoking-Service"

$retValue = Invoke-RestMethod "$($url)?username=$username&password=$plainText"

Write-Host "ReturnValueIs: "$retValue.Message

This code simply decrypts the password and then calls the service; it is a simple piece of PowerShell found on many sites, nothing complex. I then checked in this new script and fired the build. After the build completed I verified that the script correctly decrypted the password and that the service was invoked with the right decrypted value.


Figure 5: Script correctly decrypted the password using TFSBuild credentials

To verify that this technique is secure I connected as Domain Administrator, edited the build and grabbed the encrypted password from the definition. Once I had the encrypted password I ran the same PowerShell code to decrypt it, but I got an error.


Figure 6: I’m not able to decrypt the string once encrypted by a different user

Even though I’m a Domain Admin, I cannot decrypt the password, because I’m a different user. It is not a matter of permissions or of being an administrator: the original password is encrypted with data that is available only to the same combination of user and machine, so it is secure.

If you have multiple build controllers / agent machines you can still use this technique, but in the build definition you need to pin the build to the machine you used to generate the encrypted password.


Figure 7: I specified the exact agent that should run the build, because it is on the machine where I’ve encrypted the password.

In this example I’ve used PowerShell, but the very same technique can be used in a Custom Action, because DPAPI is available even in C#.
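If you want to go through the .NET API directly (the same surface a C# Custom Action would use), this is a minimal sketch of the same round trip with the ProtectedData class; the sample string is illustrative and not taken from the build above.

# Minimal sketch of DPAPI through the .NET ProtectedData class (the same API you would call from C#).
# The sample password is illustrative; DataProtectionScope.CurrentUser binds the result to the
# user/machine that runs the code, exactly like ConvertFrom-SecureString does.
Add-Type -AssemblyName System.Security

$plainBytes = [System.Text.Encoding]::UTF8.GetBytes("MyPassword")
$protected  = [System.Security.Cryptography.ProtectedData]::Protect(
                  $plainBytes, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)

# This base64 string is what you would store in the build definition
$encoded = [Convert]::ToBase64String($protected)

# Decryption succeeds only for the same user on the same machine
$decrypted = [System.Security.Cryptography.ProtectedData]::Unprotect(
                  [Convert]::FromBase64String($encoded), $null,
                  [System.Security.Cryptography.DataProtectionScope]::CurrentUser)
[System.Text.Encoding]::UTF8.GetString($decrypted)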

Gian Maria.


If you write good comments in your code, sometimes you need to search inside those comments to find the part of the code associated with a comment containing a specific word. The sad part is that you can do it only for the latest version of the code and not for the entire history of all files. Suppose you want to do a simple Proof of Concept that inserts the content of all C# source code files into some search server (e.g. Solr or Elastic Search): how can it be done with TFVC?

The answer is: with a few lines of code. First of all you need to connect to the project collection you want to index and query for the whole history of your source code.

using (var projectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(collectionUri))
{
    var versionControl = projectCollection.GetService<VersionControlServer>();

    ChangesetVersionSpec versionFrom = new ChangesetVersionSpec(1);
    VersionSpec versionTo = VersionSpec.Latest;

    var changesets = versionControl.QueryHistory(
        "$/",
        versionTo,
        0,
        RecursionType.Full,
        null,
        versionFrom,
        versionTo,
        Int32.MaxValue,
        true,
        false,
        true,
        true
        );

Once you have the list of all the changesets of the collection, you can start enumerating them and storing data inside your Solr server.

foreach (Changeset changeset in changesets)
{
    //Check the comment ... 
    //Console.WriteLine("Changeset ID:" + changeset.ChangesetId);
    //Console.WriteLine("Owner: " + changeset.Owner);
    //Console.WriteLine("Committed By: " + changeset.Committer);
    //Console.WriteLine("Date: " + changeset.CreationDate.ToString());
    //Console.WriteLine("Comment: " + changeset.Comment);
    //Console.WriteLine();
    logger.Info("Analyzing changeset " + changeset.ChangesetId);
    Int32 partcommit = 0;
    foreach (var change in changeset.Changes)
    {
        //Console.WriteLine("\tChanged: " + change.Item.ServerItem);
        //Console.WriteLine("\tOwChangener: " + change.ChangeType);
        if (change.Item.ItemType == ItemType.File) 
        {
            String tempFile = Path.Combine(Path.GetTempPath(), Path.GetFileName(change.Item.ServerItem));
            if (Path.GetExtension(tempFile) == ".cs") 
            {
                logger.Debug("Indexing: " + change.Item.ServerItem);
                String content = null;
                using (var reader = new StreamReader(change.Item.DownloadFile()))
                {
                    content = reader.ReadToEnd();
                }
                XElement elementNode;
                XDocument doc = new XDocument(
                    new XElement("add", elementNode = new XElement("doc")));

                elementNode.Add(new XElement("field", new XAttribute("name", "id"), change.Item.ServerItem + "_Cid" + changeset.ChangesetId));
                elementNode.Add(new XElement("field", new XAttribute("name", "changesetId"), changeset.ChangesetId));
                elementNode.Add(new XElement("field", new XAttribute("name", "author"), changeset.Owner));
                elementNode.Add(new XElement("field", new XAttribute("name", "path"), change.Item.ServerItem));
                elementNode.Add(new XElement("field", new XAttribute("name", "content"), content));

                solrServer.Post(doc.ToString());
            }
                            
        }
        if (partcommit++ % 100 == 0) 
        {
            solrServer.Commit();
        }
    }
}

This is not production-quality code, just a quick test to show how simple it is to download every file of every changeset from your TFVC repository. The key part is enumerating the Changes collection of the changeset object, which contains the list of changes. If the change is of type File, I simply check whether the file has a .cs extension and, if it is a C# file, I download that specific version of its content.

Thanks to the change.Item.DownloadFile() method I do not need to create a workspace: I can download only the specific file version I need and, once I have its content, I use a simple custom class to index it into a Solr server. This approach has pros and cons.

  • Pro: it is simple, a few lines of code, and you have the data inside a Solr (or Elastic Search) server ready to be queried.
  • Con: it breaks security; you now have to secure your Solr or ES server so that people are not free to access it.

In a real production scenario you need to:

  • Change the code so it runs incrementally: just store the last ChangesetId you indexed and restart from the next one.
  • Put some web service in front of your ES or Solr server: issue the search to the Solr or ES server and, once it returns the list of files matching the query, check whether the current user has permission to access those files on the original TFS server.

Gian Maria.


TFS REST APIs are one of the most exciting new features introduced in Visual Studio Online. One of the most important aspects of TFS is the ability to gather data about our team, and having full access to that data for custom reporting is a primary need for most people. While you can query the TFS Warehouse database of an on-premise TFS to gather all the data you need, you have no access to the databases of VSO. In such a scenario the REST APIs are the best way to interact with your account and quickly grab data to consume from your application.

To start experimenting with the APIs, one of the best approaches is using a REST client (I use Advanced Rest Client for Chrome): log in to your VSO account so the browser is authenticated, then start issuing requests. As an example, suppose you want to create a chart of Bug counts subdivided by State. Sadly enough the Work Item Query Language does not support grouping functions, but with the REST APIs you can get the data you need with multiple queries. One of the coolest REST endpoints is the one that allows you to execute a query written in Work Item Query Language.

You can POST to this url


https://gianmariaricci.visualstudio.com/defaultcollection/_apis/wit/queryresults?api-version=1.0-preview

With a json payload of

{ "wiql": "Select [System.Id] FROM WorkItems where 
[System.TeamProject] = 'Experiments' AND
[System.WorkItemType] = 'Bug' AND
[System.State] = 'Proposed'" }

Do not forget to set the Content-Type header to application/json, and you get a result similar to this one.

{
  "asOf": "2014-07-05T09:52:39.447Z",
  "query": {
    "type": "query",
    "columns": [ "System.Id" ],
    "queryType": "flat",
    "sortOptions": [],
    "wiql": "Select [System.Id] FROM WorkItems where [System.TeamProject] = 'Experiments' AND [System.WorkItemType] = 'Bug' AND [System.State] = 'Proposed'",
    "url": null
  },
  "results": [
    { "sourceId": 108 },
    { "sourceId": 270 },
    ... 13 items in total ...
  ]
}

When you execute a query, the result is a series of Work Item Ids, but if you need summary data for a chart you can simply count the number of elements of the results array to obtain the number of Bugs in state Proposed. In this example that number is 13. If you execute a separate query for each state, you end up with all the data you need to create a simple chart of Bug counts divided by state. This is not a big result, because this type of graph is available directly from your VSO account.


Figure 1: Count of Bugs grouped by state

But the most interesting aspect of the Work Item Query Language is the asOf operator. In the above response you can see that the result starts with

asOf: “2014-07-05T09:52:39.447Z”

This indicates that the query was evaluated at that specific instant in time, and the interesting part is that you can use the asOf operator to query Work Items at a different point in time. For example:

{ "wiql": "Select [System.Id] FROM WorkItems where 
[System.TeamProject] = 'Experiments' AND
[System.WorkItemType] = 'Bug' AND
[System.State] = 'Proposed'
asof '2014-07-04T10:00:00.000Z'
 " }

As you can see I’ve added asOf and a timestamp at the end of the query. This instructs TFS to execute the query and return the results that were valid at that specific timestamp; in fact a different number of items is returned.

{
  "asOf": "2014-07-04T10:00:00Z",
  "query": {
    "type": "query",
    "columns": [ "System.Id" ],
    "queryType": "flat",
    "sortOptions": [],
    "wiql": "Select [System.Id] FROM WorkItems where [System.TeamProject] = 'Experiments' AND [System.WorkItemType] = 'Bug' AND [System.State] = 'Proposed' asof '2014-07-04T10:00:00.000Z' ",
    "url": null
  },
  "results": [
    { "sourceId": 108 },
    ... 16 items in total ...
  ]
}

The number of Bugs in state Proposed was 16, not 13, at that earlier timestamp. If you issue multiple queries you can also create a trend graph with ease.

Thanks to the asOf operator and the REST API, grabbing historical data from TFS to create custom charts or reports could not be easier. My suggestion is to create a routine that grabs the data you need and saves it in some local store. Run that routine each day to keep your local data aligned, then manipulate the data with Excel PowerPivot or a similar tool to create the charts you need.
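As a starting point for such a routine, here is a minimal PowerShell sketch that replays the same WIQL query at a few points in time with asOf and prints the Bug count for each date. It assumes alternate credentials are enabled on the VSO account (to build a Basic authentication header); account, credentials, project name and dates are examples.

# Minimal sketch: count Bugs in state Proposed at different points in time using the asOf operator.
# Account, credentials, project name and dates are examples; alternate credentials (Basic auth) are assumed.
$account = "gianmariaricci"
$cred    = "username:password"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($cred)) }
$url     = "https://$($account).visualstudio.com/defaultcollection/_apis/wit/queryresults?api-version=1.0-preview"

foreach ($date in "2014-07-01", "2014-07-02", "2014-07-03", "2014-07-04", "2014-07-05")
{
    $wiql = "Select [System.Id] FROM WorkItems where [System.TeamProject] = 'Experiments' " +
            "AND [System.WorkItemType] = 'Bug' AND [System.State] = 'Proposed' " +
            "asof '$($date)T10:00:00.000Z'"
    $body = @{ wiql = $wiql } | ConvertTo-Json
    $response = Invoke-RestMethod -Uri $url -Method Post -Headers $headers -ContentType "application/json" -Body $body

    # The number of returned ids is the count of Bugs in that state at that date
    Write-Host "$date : $($response.results.Count)"
}

From the daily counts you can then build the trend chart in Excel or any other tool you like.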

Gian Maria.


The problem

 

One of the most dreadful problems of Unit Testing is slow tests. If your whole suite takes 10 minutes to run, it is normal for developers not to run it at each build. One of the most common questions is

How can I deal with slow Unit Tests?

Here is my actual scenario: in a project I’m working on, we have some multilingual full text search done in Elastic Search, and we have a battery of Unit Tests that verify that searches work as expected. Since each test deletes all documents, inserts a bunch of new ones and finally commits the Lucene index, execution time is high compared to the rest of the tests. Each test needs almost 2 seconds to run on my workstation, where I have a really fast SSD and plenty of RAM.

This kind of test cannot be run in memory or sped up with some fancy trick. Right now we have about 30 tests that execute in less than one second, and another 13 tests that run in about 23 seconds, which is clearly unacceptable. After a few hours of work we had already reached the point where running the whole suite becomes annoying.

The solution

 

This is a really common problem and it is quite simple to fix. First of all, the Visual Studio Test Runner tells you the execution time of each Unit Test, so you can immediately spot slow tests. When you identify slow tests you can mark them with a specific category; I use slowtest.

    [TestFixture]
    [Category("elasticsearch")]
    [Category("slowtest")]
    public class EsSearcherFixture : BaseTestFixtureWithHelpers

Since I know in advance that these tests are slow, I immediately mark the entire class with the slowtest category. If you have no idea which of your tests are slow, I suggest grouping tests by Duration in the Visual Studio Test Runner.


Figure 1: Group tests by Duration

The result is interesting, because Visual Studio considers every test that needs more than one second to be slow. I tend to agree with this distinction.


Figure 2: Tests are now grouped by duration

This permits you to immediately spot slow tests, so you can add the slowtest category to them. If you keep your Unit Tests organized with a good usage of categories, you can simply ask the VS Test Runner to exclude slow tests with the filter -Traits:"slowtest".


Figure 3: Thanks to filtering I can now continuously execute only the tests that are not slow.

I suggest doing a periodic check to verify that every developer is using the slowtest category wisely: just group by duration, filter out the slowtest category, and no slow tests should be listed.


Figure 4: Removing the slowtest category and grouping by duration should list no slow test.

The nice part is that all of this works even though I’m using NUnit, because the Visual Studio Test Runner supports many Unit Test frameworks thanks to the concept of Test Adapters.
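The same category can also be used to keep slow tests out of a quick run outside Visual Studio, for example on a build server with the NUnit console runner; in this sketch the runner path and the test assembly name are only examples.

# Hypothetical invocation: run every test except those in the slowtest category.
# Adjust the nunit-console path and the assembly name to your environment.
& "C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console.exe" .\MyProject.Tests.dll /exclude:slowtest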

If you keep your tests well organized you will gain maximum benefit from them :).

Gian Maria.
