Use the right Azure Service Endpoint in build vNext

Build vNext has a task dedicated to uploading files to Azure blob storage, as you can see from Figure 1:

Sample build vNext that has an Azure File Copy task configured.

Figure 1: Azure File Copy task configured in a vNext build

The nice part is the Azure Subscription setting, which allows you to choose one of the Azure endpoints configured for the project. Using service endpoints, you can ask the person who has the password/keys for the Azure account to configure an endpoint. Once it is configured, it can be used by any team member with sufficient rights to access it, without requiring them to know passwords, tokens or anything else.

Thanks to Service Endpoints you can allow members of the team to create builds that interact with Azure accounts without giving them any password or token.

If you look around you can find a nice blog post that explains how to connect your VSTS account using a service principal.

Sample configuration of an endpoint for Azure with a Service Principal

Figure 2: Configure a service endpoint for Azure with Service Principal Authentication

Another really interesting aspect of Service Endpoints is the ability to choose the people who can administer the endpoint and the people who can use it, giving you full control over who can do what.

Each Service Endpoint has its own security settings to specify the people who can administer or read the endpoint

Figure 3: You can manage security for each Service Endpoint configured

Finally, Service Endpoints give you a centralized way to manage access to your Azure subscription resources: if for some reason a subscription should be removed and no longer used, you can simply remove the endpoint. This is a better approach than having passwords or tokens scattered all over the VSTS account (builds, etc.).

I followed all the steps in the article to connect my VSTS account using a service principal, but when it was time to execute the Azure File Copy task, I got a strange error.

Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agents\default\tasks\AzureFileCopy\1.0.25\AzureFileCopy.ps1
Looking for Azure PowerShell module at C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1
AzurePSCmdletsVersion= 0.9.8.1
Get-ServiceEndpoint -Name 75a5dd41-27eb-493a-a4fb-xxxxxxxxxxxx -Context Microsoft.TeamFoundation.DistributedTask.Agent.Worker.Common.TaskContext
tenantId= ********
azureSubscriptionId= xxxxxxxx-xxxxxxx-xxxx-xxxx-xxxxxxxxxx
azureSubscriptionName= MSDN Principal
Add-AzureAccount -ServicePrincipal -Tenant $tenantId -Credential $psCredential
There is no subscription associated with account ********.
Select-AzureSubscription -SubscriptionId xxxxxxxx-xxxxxxx-xxxx-xxxx-xxxxxxxxxx
The subscription id xxxxxxxx-xxxxxxx-xxxx-xxxx-xxxxxxxxxx doesn't exist.
Parameter name: id
The Switch-AzureMode cmdlet is deprecated and will be removed in a future release.
The Switch-AzureMode cmdlet is deprecated and will be removed in a future release.
Storage account: portalvhdsxxxxxxxxxxxxxxxxx1 not found. Please specify existing storage account

This error is really strange, because one of the error lines told me:

The subscription id xxxxxx-xxxxxx-xxxxxx-xxxxxxxxxxxxx doesn’t exist.

This cannot be the real error, because I am really sure that my Azure subscription is active and it is working everywhere else. Thanks to the help of Roopesh Nair, I was able to find my mistake. It turns out that the storage account I was trying to access is an old one created in Azure Classic mode, and it is not accessible with a Service Principal: a Service Endpoint using a Service Principal can manage only Azure Resource Manager based entities.

Shame on me :) because I was aware of this limitation, but for some reason I completely forgot about it this time.

Another sign of the problem is the error line telling me: Storage account xxxxxxxxx not found. That should ring a warning bell: the script cannot find that specific resource because it was created in classic mode.

The solution is simple: I could use a blob storage account created with Azure Resource Manager, or I can configure another Service Endpoint, this time based on a management certificate. The second option is preferable, because having two Service Endpoints, one configured with a Service Principal and the other configured with a certificate, allows me to manage all types of Azure resources.
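As a quick sanity check, you can verify which deployment model a storage account belongs to with the cross-platform Azure CLI. This is only a sketch based on the old xplat CLI commands of that era; exact command names may differ in other versions:

```shell
# A classic storage account is listed only in asm (Service Management)
# mode, an ARM one only in arm (Resource Manager) mode.
azure config mode asm
azure storage account list   # classic accounts

azure config mode arm
azure login                  # arm mode requires an interactive login
azure storage account list   # Resource Manager accounts
```

If the account shows up only in asm mode, the certificate-based endpoint is the one to use.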

Configuring an endpoint with a certificate is really simple: you only need to copy the data from the management certificate into the endpoint configuration and you are ready to go.

Configuration of an endpoint based on a certificate

Figure 4: Configure an Endpoint based on Certificate

Now my Azure File Copy build task works as expected, and I can choose the right Service Endpoint based on the type of resource I need to access (Classic or ARM).

Gian Maria

Where is my Azure VM using Azure CLI?

The Azure command line interface, known as Azure CLI, is a set of open source, cross-platform commands to manage your Azure resources. The most interesting aspect of these tools is that, being cross platform, you can use them even on Linux boxes.

After you have imported your management certificate, you can issue a simple command

azure vm list

to list all of the VMs in your account.
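The certificate import mentioned above can be done entirely from the CLI; a minimal sketch (the .publishsettings file name is just an example):

```shell
# Open the browser to download the publish settings file for your
# subscription, then import it so asm-mode commands can authenticate.
azure account download
azure account import credentials.publishsettings
azure vm list
```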

image

Figure 1: Output of azure vm list command

If you wonder why not all of your VMs are listed, the reason is that Azure CLI defaults to the asm command mode, so you are not able to manage resources created with the new Resource Manager. To learn more, I suggest you read this couple of articles:

Use Azure CLI with Azure Resource Manager
Use Azure CLI with Azure Service Management

If you want to use the new Resource Manager, you should switch modes with the command:

azure config mode arm

But now, if you issue the vm list command, you will probably get an error telling you that you are missing authentication. Unfortunately, you cannot use a certificate to manage your account (as you can with Azure PowerShell or with Azure CLI in asm mode). To authenticate in Azure Resource Manager mode, you should use the command

azure login

But you need to use an account created in your Azure Directory, not your primary Microsoft account (at least mine does not work). This article will guide you in creating a user that can be used to manage your account. Basically, you should go to the old portal, open the Active Directory page and create a new account. Then, from the global settings pane, you should add that user to the Subscription Administrators group. Please use a really strong password, because that user can do everything with your account.

Once you are correctly logged into Azure from the CLI, you can use the same command to list VMs, but now you will see all the VMs created with the new Resource Manager.
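Putting the whole sequence together, the two modes can be sketched like this (same xplat CLI commands shown above):

```shell
azure config mode asm   # default: Service Management mode
azure vm list           # lists only classic VMs

azure config mode arm   # switch to Resource Manager mode
azure login             # certificate auth is not supported here
azure vm list           # lists only Resource Manager VMs
```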

image 

Figure 2: In arm mode you are able to list VM created with the new Resource Manager.

The same happens in the standard Azure Portal GUI, where you have two distinct nodes for Virtual Machines, depending on whether they were created with Azure Service Management or Azure Resource Manager.

image

Figure 3: Even in the portal you should choose which category of VM you want to manage

Gian Maria

Where is my DNS name for an Azure VM with the new Resource Manager?

Azure is changing the management model for resources, as you can read in this article, and this is the reason why, in the new portal, you can see two different entries for some resources, e.g. Virtual Machines.

image

Figure 1: Classic and new resource management in action

Since the new model gives more control over resources, I created a Linux machine with the new model to do some testing. After the machine was created I opened its blade (machines created with the new model are visible only in the new portal) and I noticed that there is no DNS name setting.

image

Figure 2: Summary of my new virtual machine, computer name is not a valid DNS address

Compare Figure 2 with Figure 3, which shows the summary of a VM created with the old resource management. As you can see, the computer name is a valid address in the cloudapp.net domain.

image

Figure 3: Summary of VM created with old resource management, it has a valid DNS name.

Since these are test VMs that are off most of the time, the IP changes at each reboot, and I really want a stable, friendly name to store my connection in PuTTY/mRemoteNG.

From the home page of the portal, you should see the resource group the machine was created in. If you open it, you can see all the resources that belong to that group. In the list you should see your virtual machine as well as the IP address resource (point 2 in Figure 4), which can be configured to have a DNS name label. The label is optional, so it is not automatically set up for you during machine creation, but you can specify it later.

image

Figure 4: Managing ip addresses of network attached to the Virtual Machine

Now I set the name of my machine to docker.westeurope.cloudapp.azure.com to have a friendly DNS name for my connection.
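The same label can also be set from the cross-platform Azure CLI; a sketch, assuming the xplat CLI's public-ip commands (the resource group and IP resource names are examples):

```shell
# Assign the DNS name label "docker" to the public IP resource of the VM.
azure config mode arm
azure login
azure network public-ip set MyResourceGroup my-vm-ip --domain-name-label docker
# The VM should then be reachable as docker.<region>.cloudapp.azure.com
```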

Enjoy.

How to connect an existing VSO account to the new Azure portal

With the new deployment of Visual Studio Online you can link your existing VSO accounts to your Azure subscription so they will be available in the new Azure portal (http://portal.azure.com). You just need to connect to the standard management portal (http://manage.windowsazure.com) and add an existing VSO account to the list of available ones.

image

Figure 1: Use the New button in azure portal to add existing VSO account to new azure portal.

Your account is now connected to your Azure subscription; next, you should connect the account to your Azure Directory. This is not an automatic operation: you need to go to the account details page and ask to connect the account to your directory. After some time your account should be connected to the Azure Directory.

image

Figure 2: Your VSO account is now connected to My Default Directory

You should now be able to view your account in the new portal http://portal.azure.com

image

Figure 3: My existing VSO account is now available in the new azure portal.

This is an important change for your account, because now all the users are taken from the Default Directory, and existing users are no longer able to access your service until you add them to your directory. This step is also needed to be able to connect to VSO with your corporate credentials, if your Azure Directory is synchronized with your Active Directory.

I strongly suggest you read this fantastic post by Mitch Denny, as well as some interesting links, to better understand how this works.

Gian Maria

Install and configure a TFS Release Manager Deployer Agent in Azure VM

The Problem

 

You have a domain with TFS and Release Management; there are no problems deploying agents on machines inside the domain, but you are not able to configure an agent for machines outside the domain.

E.g.: you have some Azure VMs you want to use in your release pipeline and you do not want to join them to the domain with a VPN or other mechanism.

This scenario usually ends with not being able to configure Deployment Agents on those machines due to various authorization problems. The symptoms range from getting 401 errors when you try to configure the agent on the VM, to being able to configure the Deployment Agent but never seeing any heartbeat on the server when the service starts, while the Event Viewer of the VM shows errors like this one:

Timestamp: 6/10/2014 6:17:12 AM
Message: Error loading profile for current user: nabla-dep1\inreleasedeployer
Category: General
Priority: -1
EventId: 0
Severity: Error
Title:
Machine: NABLA-DEP1

The usual cause

 

The most common cause of the problem is a bad configuration of shadow accounts and authentication problems between the machine outside the domain and the machines inside the domain. I want to share with you the sequence of operations that finally made my Azure VM run a Deployer Agent and connect to the Release Management server.

Here is my scenario:

My domain is called CYBERPUNK and the user used to run deployment agents is called InReleaseDeployer.

The machine where TFS and the RM server are installed is called TFS2013PREVIEWO, and the Azure VM is called NABLA-DEP1.

I have already added the user CYBERPUNK\InReleaseDeployer as a Service User in Release Management, and I have already used it to deploy an agent on a machine inside the domain with no problem.

Now let's configure everything for an Azure VM.

The solution

 

This is the solution:

You need THREE ACCOUNTS:

  1. CYBERPUNK\InReleaseDeployer: you already have this one, because it is the standard domain account used for the Deployment Agent on machines joined to your domain.
  2. TFS2013PREVIEWO\InReleaseDeployer: this is the shadow account on the RM server machine. It is a local user that should be created on the machine running the Release Management server.
  3. NABLA-DEP1\InReleaseDeployer: this is the shadow account on the Azure VM; it is a local user on the Azure VM.

Now make sure that these accounts satisfy these conditions:

  1. All three accounts must have the same password.
  2. NABLA-DEP1\InReleaseDeployer must be an administrator of the NABLA-DEP1 machine, and it must also have the right to log on as a service.
  3. All three accounts should be added as Release Management users with the permissions described below.
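The two local shadow accounts can be created from an elevated command prompt on each machine; a quick sketch (the password is a placeholder, and it must be the same one used for all three accounts):

```shell
REM On the RM server (TFS2013PREVIEWO) and on the Azure VM (NABLA-DEP1):
net user InReleaseDeployer P@ssw0rdPlaceholder /add

REM On the Azure VM only, make the shadow account a local administrator:
net localgroup Administrators InReleaseDeployer /add
```

Note that the "log on as a service" right still has to be granted separately (e.g. via the Local Security Policy console).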

The domain user should have the standard Service User flag set.

image

Then the shadow account on the RM machine should also be a Release Manager, not only a Service User; here is the setting.

image

Please note that the user is expressed in the form MACHINENAME\username, so it is TFS2013PREVIEWO\InReleaseDeployer, and it is both a Release Manager and a Service User.

Finally you need to add the user of the Azure VM.

image

This user too must be expressed in the form MACHINENAME\username, so it is NABLA-DEP1\InReleaseDeployer. This completes the setup of the shadow accounts for your RM server.

Now it is the turn of the Azure VM. Connect via Remote Desktop to the Azure VM and log in with the NABLA-DEP1\InReleaseDeployer user; do not use other users. Before configuring the agent, open Credential Manager and add a Windows Credential specifying the credentials that should be used to connect to the public address of the remote Release Management server. Be sure to prefix the user name with the domain, as in the following picture (CYBERPUNK\InReleaseDeployer).
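If you prefer the command line, the same Windows Credential can be added with cmdkey; a sketch where rm.example.com stands for the public address of your RM server:

```shell
REM Store the domain credentials used to reach the remote RM server.
cmdkey /add:rm.example.com /user:CYBERPUNK\InReleaseDeployer /pass:PlaceholderPassword

REM Verify that the credential was stored:
cmdkey /list
```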

image

You should now see the credentials you just entered.

image

Actually, adding credentials to Windows Credentials is required only if you want to use the RM client to connect to the server, but I noticed that my user had problems connecting to the server when I skipped this part, so I strongly suggest you add the RM server to Windows Credentials to avoid problems.

Now the last step is configuring the agent. You must specify NABLA-DEP1\InReleaseDeployer as the user used to run the service, and the public address of your Release Management server.

image

Press Apply settings and the configuration should complete with no errors.

image

Once the Deployer Agent is configured, you should be able to find the new agent from the Release Management client, in Configure Paths –> Servers –> New –> Scan For New.

image

Everything is OK: my RM server is able to see the deployer on the VM even though the VM is outside the network and not joined to the corporate domain. Now you can select the new agent and press Register to add it to the list of valid Deployer Agents.

image

Gian Maria.