Friday, September 20, 2013

Set Network ACLs using Windows Azure PowerShell Commands

In the latest update of the Azure PowerShell cmdlets, there is an option to set network ACLs for VM endpoints. The key points are:


  • Access to an endpoint can be allowed or blocked based on an IP address range
  • A maximum of 50 ACL rules is possible per VM endpoint
  • Lower-numbered rules take precedence over higher-numbered rules
  • If you create a Permit rule, all other IP ranges are blocked
  • Similarly, if you define a Deny rule, all other IPs are permitted
  • If no ACLs are defined, everything is permitted by default
Steps for setting a Permit ACL for a particular IP are given below. Before executing them, make sure that you have set your subscription correctly as described in my previous post.
  • Create a new ACL object
$acl = New-AzureAclConfig
  • Create the Permit rule and add it to the ACL
Set-AzureAclConfig -AddRule -ACL $acl -Order 50 -Action Permit -RemoteSubnet "110.124.37.30/32" -Description "Test-ACL configuration"

Here I am explicitly permitting access from a public IP

  • Now we need to apply this rule to the VM endpoint. In order to list the available endpoints on the VM, you can use the following command
Get-AzureVM -ServiceName testvm1 -Name testvm1 | Get-AzureEndpoint

Then you need to set the ACL for the required endpoint. In this example, I am going to set an ACL for the RDP endpoint of my test VM

Get-AzureVM -ServiceName testvm1 -Name testvm1 | Set-AzureEndpoint -Name 'Remote Desktop' -Protocol tcp -LocalPort 3389 -PublicPort 3389 -ACL $acl | Update-AzureVM

  • Once the task completes successfully, you can verify the ACL status using the following commands
$endpoint = Get-AzureVM -ServiceName testvm1 -Name testvm1 | Get-AzureEndpoint -Name 'Remote Desktop'
$endpoint.Acl
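The -RemoteSubnet parameter takes a CIDR range, and a /32 matches exactly one address. As a rough illustration of the matching semantics (a sketch using Python's standard ipaddress module, not the Azure tooling itself):

```python
import ipaddress

# A Permit rule with a /32 RemoteSubnet matches exactly one source address
rule = ipaddress.ip_network("110.124.37.30/32")
print(rule.num_addresses)                              # 1
print(ipaddress.ip_address("110.124.37.30") in rule)   # True
print(ipaddress.ip_address("110.124.37.31") in rule)   # False: every other IP hits the implicit deny
```

A wider range such as 110.124.37.0/24 would permit the whole subnet instead of a single host.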







Back to basics : Networking - Part 2

IPV6 Basics:

  • IPv4 uses a 32-bit address space, whereas IPv6 uses a 128-bit address space
  • Represented as eight groups of four hexadecimal digits separated by colons, and uses Classless Interdomain Routing (CIDR)
  • In a typical global unicast address, the first 48 bits are the network prefix, the next 16 bits are the subnet ID and the last 64 bits are the interface identifier
  • There are three kinds of IPv6 addresses: Unicast, Multicast and Anycast
  • Unicast: Identifies a single interface, equivalent to the IPv4 address of a machine
  • Multicast: Identifier for multiple network interfaces. Commonly used for sending signals to a given group of systems or for streaming video to multiple computers
  • Anycast: The packet is delivered to the nearest (in terms of routing distance) interface that holds the address
  • IPv6 does not have broadcast messages
  • Unicast and Anycast addresses have the following scopes:
  • Link-local: Scope is the local link (i.e. nodes on the same subnet). Prefix for link-local addresses is FE80::/64
  • Site-local: Scope is the organization, i.e. private site addressing. Prefix is FEC0::/48
  • Global: Used for IPv6 Internet addresses, which are globally routable
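The address size and scopes above can be checked with Python's standard ipaddress module (a sketch; the addresses used are just examples):

```python
import ipaddress

# IPv6 is a 128-bit address space: ::/0 covers 2**128 addresses
print(ipaddress.ip_network("::/0").num_addresses == 2 ** 128)   # True

# Link-local addresses fall under the FE80:: prefix
lla = ipaddress.ip_address("fe80::1")      # example link-local address
print(lla.is_link_local)                   # True

# Eight colon-separated groups of four hex digits, shown in full form
print(lla.exploded)                        # fe80:0000:0000:0000:0000:0000:0000:0001
```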
Difference between TCP and UDP:

  • TCP is a connection-oriented protocol. Data will be delivered even if packets are lost, because the receiver will request retransmission of the lost parts, and messages are protected against corruption in transit. UDP is a connectionless protocol: you send the data and forget it, with no guarantee of delivery or corruption-free transmission
  • TCP: if messages are sent one after the other, the message sent first will arrive first. With UDP, you cannot be sure of the order in which the data arrives
  • TCP: data is sent as a stream, with nothing distinguishing where one message starts or ends. UDP: data is sent as datagrams, which arrive whole (message boundaries are preserved)
  • TCP examples: World Wide Web (HTTP), SMTP, FTP, SSH
  • UDP examples: DNS, VoIP, TFTP etc.
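The connectionless, datagram-oriented behaviour of UDP can be seen in a short loopback sketch using Python's socket module (the port is chosen by the OS):

```python
import socket

# UDP is connectionless: a datagram is sent with no handshake and no delivery
# guarantee. Over loopback it arrives intact, and message boundaries are
# preserved (unlike TCP's byte stream).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))   # no connect() needed

data, addr = receiver.recvfrom(1024)     # one recvfrom returns one whole datagram
print(data)                              # b'hello'

sender.close()
receiver.close()
```

Over a real network, that same sendto could be silently dropped or reordered, which is exactly the trade-off described above.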
Spanning tree protocol: Ensures that there are no loops while creating redundant paths in your network


One switch is elected as the root switch; decisions such as which port to put in forwarding mode and which port in blocking mode are taken by this switch

Command to set root switch for a vlan: 
set spantree root vlan_id 


Managing Windows Azure using PowerShell cmdlets

In order to start managing your Azure subscriptions using PowerShell cmdlets, first you need to install Windows Azure PowerShell from here


  • Open the Azure PowerShell window from Start -> All Programs -> Windows Azure -> Windows Azure PowerShell
  • In order to manage a subscription, you will have to import the management certificate for it. You can use the commands below


$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$Filepath = "D:\certs\managementcert.pfx"    # Provide the path to your management cert here
$password = 'Password'                       # Give your certificate password here
$cert.Import($Filepath, $password, 'Exportable,PersistKeySet')    # $cert now holds your management certificate


  • Now you need to set your subscription ID and subscription name. You can get the values from the management portal -> Settings
$subscriptionId = '1935b212-1179-4231-a4e6-g7614be788s4'
$subscriptionName = 'YOUR_SUBSCRIPTION_NAME'

  • Next you need to set the Azure subscription
Set-AzureSubscription -SubscriptionName $subscriptionName -SubscriptionId $subscriptionId -Certificate $cert

Now you can start executing the Azure cmdlets against the resources in your subscription.

Complete reference of Azure Powershell commandlets can be found here: 
http://msdn.microsoft.com/en-us/library/windowsazure/dn408531.aspx




Wednesday, September 18, 2013

Windows Azure fault domain and upgrade domain

Fault Domain: In simple words, a fault domain can be considered a single point of failure. For example, servers hosted in a rack in a data center can be considered a fault domain, because a power failure in the rack will bring down all the servers in it. At deployment time, the instances in a role are assigned to different fault domains to provide fault tolerance (only when there are multiple fault domains)

Upgrade Domain: This concept applies during a deployment upgrade. Each upgrade domain can be considered a logical unit of deployment. An application upgrade is carried out on a per-upgrade-domain basis, i.e. the instances in the first upgrade domain are stopped, upgraded and brought back into service, followed by the second upgrade domain, and so on. This ensures that the application remains accessible during the upgrade process, though with reduced capacity
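The per-upgrade-domain rollout can be sketched as a toy simulation (the domain layout and instance names are hypothetical, purely for illustration):

```python
# Toy sketch: instances grouped into upgrade domains; one domain is taken down at a time
domains = {0: ["web_0", "web_2"], 1: ["web_1", "web_3"]}

def rolling_upgrade(domains):
    order = []
    for ud in sorted(domains):
        down = set(domains[ud])                      # only this domain is offline...
        serving = [i for d in domains for i in domains[d] if i not in down]
        assert serving                               # ...so some instances always keep serving
        order.extend(domains[ud])
    return order

print(rolling_upgrade(domains))   # ['web_0', 'web_2', 'web_1', 'web_3']
```

With a single upgrade domain the `serving` list would be empty during the upgrade, which is why multiple domains are needed for availability.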

Windows Azure storage concepts

You can create storage accounts in Windows Azure and provide your applications access to the tables, blobs and queues in them.

  • The maximum capacity of a storage account is 200 TB if it was created after June 8th, 2012, and 100 TB if created before that
  • Geo-redundant storage (GRS): Replicates the storage to a secondary, geographically separate location. Data is replicated asynchronously to the secondary location in the background. If there is a failure in the primary location, storage will fail over to the secondary location
  • Locally redundant storage (LRS): The data is replicated three times within the same datacenter. All Windows Azure storage is locally redundant
  • Affinity group: A geographical grouping of cloud deployments and storage accounts. By grouping the services used by your application in an affinity group in a particular geographical location, you can improve your service performance
  • Storage account endpoints: The top-level namespace for accessing the tables, queues and blobs in a storage account. The default endpoints have the following values
Blob service: http://mystorageaccount.blob.core.windows.net
Table service: http://mystorageaccount.table.core.windows.net
Queue service: http://mystorageaccount.queue.core.windows.net


  • Storage account URLs: URLs for accessing an object in a storage account. For example: http://mystorageaccount.blob.core.windows.net/mycontainer/myblob
  • Storage access key: This is the 512-bit access key generated by Windows Azure when you create a storage account. There will be two keys, primary and secondary. You can choose to regenerate the keys at a later point if required
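The default endpoint URLs follow a fixed pattern derived from the account name, which can be sketched as follows (the account, container and blob names are the examples from above):

```python
# Build the default service endpoints for a storage account
account = "mystorageaccount"
endpoints = {svc: f"http://{account}.{svc}.core.windows.net"
             for svc in ("blob", "table", "queue")}

# An object URL is the service endpoint plus the container and blob name
blob_url = f"{endpoints['blob']}/mycontainer/myblob"
print(blob_url)   # http://mystorageaccount.blob.core.windows.net/mycontainer/myblob
```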

Blobs: Blobs are mainly used to store large amounts of unstructured data. All blobs must be created inside a container, and there can be an unlimited number of blobs in an account. There are two types of blobs: page blobs (maximum size 1 TB) and block blobs (maximum size 200 GB)

Tables: Tables are used to store structured but non-relational data. The table service is a NoSQL datastore that can service authenticated calls from inside and outside the Windows Azure cloud. A table is a collection of entities, but it does not force a schema on the entities. This means that a single table can have entities with different sets of properties. An entity is a set of properties, similar to a DB row, and can be up to 1 MB in size. A property is a name-value pair, and an entity can have up to 252 properties for storing data. Each entity also has three system-defined properties: a partition key, a row key and a timestamp

Queues: The queue service stores messages that can be accessed using authenticated HTTP or HTTPS calls. A single queue message can be up to 64 KB in size, and a queue can hold millions of messages, limited only by the maximum storage capacity. It is mostly useful in scenarios where there is a backlog of messages to be processed asynchronously, or to pass messages from a Windows Azure web role to a worker role

Windows Azure host and guest OS updates

Windows Azure host OS is the root partition, which is responsible for creating child partitions to execute Windows Azure services and the guest OS. The host OS is updated at least once a quarter to keep the environment secure. Updating the host OS means that the VMs hosted on it must be shut down and then restarted. While the upgrade is carried out, Azure ensures that VMs in different update domains are not down simultaneously, so the availability of hosted applications is not affected. An optimal order for updating the servers is identified before proceeding with the upgrade.

Windows Azure guest OS runs on the VMs that host your applications in Azure. The OS is updated periodically, each time a new update is released. You can choose to have this done automatically or upgrade it manually at a time of your choosing. Microsoft recommends automatic OS updates, so that known security vulnerabilities are taken care of and your application runs in an up-to-date environment.

In order to configure your guest OS for automatic updates, you need to edit the ServiceConfiguration element in the .cscfg file as follows

<ServiceConfiguration serviceName="RM.Unify.Launchpad" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"  osFamily="2" osVersion="*" schemaVersion="2012-05.1.7">

osVersion="*" specifies that the OS should be updated automatically

PS: The different OS families are identified by an OS family number and should be read as follows

Windows Server 2008 SP2 - osFamily 1
Windows Server 2008 R2 - osFamily 2
Windows Server 2012 - osFamily 3

Configuring Diagnostics for Windows Azure cloud service

Steps for configuring the Windows Azure diagnostics are as follows:

  • Import the Diagnostics module in the csdef file
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
  • The options for tracing and debugging can be included in the Windows Azure application code
  • Custom performance counters can be created for web and worker roles using PowerShell scripts in startup tasks. You can collect data from the existing performance counters as well
  • Store diagnostics data in an Azure storage account, since the collected data is only cached locally and hence does not persist. The diagnostics storage can be defined in the cscfg file using the following setting
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=storagename;AccountKey=storageaccesskey" />

Replace storagename and storageaccesskey with the name and access key of your diagnostics storage account

Tuesday, September 17, 2013

Input and Internal Endpoints in Windows Azure

Azure cloud services have two types of environments - production and staging. The production environment has a permanent DNS name associated with it, which resolves to a single public virtual IP (VIP). The DNS name of the staging environment keeps changing, and it also resolves to a public VIP.

Input endpoints are defined for enabling external connections to the public VIP of the cloud service. The HTTP, HTTPS or TCP protocol can be used for the connection. The ports, protocols and certificates to be used for the connection are defined in the csdef file in the <Endpoints> configuration section. A sample is given below

    <Endpoints>
      <InputEndpoint name="httpsin" protocol="https" port="443" certificate="SSL" />
      <InputEndpoint name="httpin" protocol="http" port="80" />
    </Endpoints>


  • Each defined endpoint must listen on a unique port
  • A hosted service can have up to a maximum of 25 input endpoints, which can be distributed among its roles
  • The Azure load balancer uses the port defined in the config file to make sure that the service is available on the Internet
Internal endpoints are used for role-to-role communication. Again, a maximum of 25 internal endpoints is available per hosted service. When you define an internal endpoint, the port is not mandatory; if it is not defined, the Azure fabric controller will assign one


   <Endpoints>
         <InternalEndpoint name="InternalHttpIn" protocol="http" port="1000"/>
      </Endpoints>


Configure RDP for Windows Azure cloud service instance


 In order to RDP to a Windows Azure cloud service instance, execute the steps given below:


  • Generate an encryption certificate and upload it to the respective cloud service. This certificate is used to encrypt the RDP credentials
  • Encrypt the RDP password using the certificate thumbprint. You can use the csencrypt command-line utility available with the Windows Azure SDK to encrypt the password - Ref: http://msdn.microsoft.com/en-us/library/windowsazure/hh403998.aspx
  • Import the RemoteAccess and RemoteForwarder modules in the csdef file
    <Imports>
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  • Update the Remote desktop connection configuration values in the cscfg file. The settings are

<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2014-06-27T23:59:59.0000000+05:30" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  • In the above settings, the values of the username and the encrypted password should be updated
  • The cscfg file updated with the above settings can be deployed to the cloud service along with the cspkg file
  • Once the deployment is completed, log in to the Azure management portal -> cloud service -> Instances. Select the instance you want to connect to using RDP, and click Connect in the bottom menu
  • An RDP file will be downloaded, which you can open/save, and then use the username and password provided in the .cscfg file to connect to the selected instance
  • In case you need to reset the password, go to cloud service -> Configure and select Remote in the bottom menu. You will get options to enable/disable RDP, set a new password, select the certificate, update the expiry date etc.

Windows Azure cloud services - Roles and config files

A Windows Azure cloud service is, in simple terms, an application designed to be hosted in the cloud, with configuration files that define how the service should be run.

Two files decide the settings for the cloud service - Service definition  file(.csdef) & Service configuration file (.cscfg)

Service definition file:

This file defines the settings that will be used for configuring a cloud service. It defines the following settings
Sites - Definition of websites or applications hosted in IIS7
InputEndPoints - End points used for contacting the cloud service
InternalEndPoints - Endpoints for role instances to talk to each other
Configuration Settings - Settings specific for a role
Certificates - Defines certificates used by a role
Local Resources - Details of local storage, this will be a reserved directory in the file system of the virtual machine in which a role is running
Imports - Defines the modules to be imported for a role. For example, to enable RDP connections to a VM, we need to import the RemoteAccess & RemoteForwarder modules. To enable diagnostics, we need to import the module named Diagnostics
Startup - used to define startup tasks that will be executed when the role starts

The service definition file is packaged along with the application in the .cspkg file used for creating/updating a cloud service

Service configuration file:

The values of the settings defined in the service definition file are updated in the service configuration file - for example, the number of role instances, remote desktop settings like the username and encrypted password, and other application-specific configuration values. This file is uploaded separately and is not included in the application package. We can also change the values while the cloud service is running

Cloud service roles:

Two types of roles are supported in Windows Azure cloud service

Web role: A role customized for web applications. If you select this role type, IIS 7 comes pre-installed on the VM. It is most commonly used for hosting the web frontend.

Worker role: This role is mainly used for background processing on behalf of a web role. Long-running processes or intermittent tasks should be configured to execute in this role


Friday, September 13, 2013

Back to basics : Networking - Part 1

Range of different classes of IP addresses:

Based on the range of first octet
Class A:  1-126
Class B:  128-191
Class C: 192-223

Private IP ranges

Class A: 10.0.0.0 to 10.255.255.255
Class B: 172.16.0.0 to 172.31.255.255
Class C: 192.168.0.0 to 192.168.255.255

APIPA address: 169.254.0.0 to 169.254.255.255
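These ranges can be checked in code. A small sketch with Python's standard ipaddress module (ip_class is a hypothetical helper implementing the first-octet rule above):

```python
import ipaddress

def ip_class(ip: str) -> str:
    # Hypothetical helper: classful network classes based on the first octet
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if first == 127:
        return "loopback"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "other"

print(ip_class("10.1.2.3"))                                  # A
print(ipaddress.ip_address("172.16.0.1").is_private)         # True  (RFC 1918 private range)
print(ipaddress.ip_address("169.254.10.10").is_link_local)   # True  (APIPA range)
```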

MAC address:

A Media Access Control address is associated with a network adapter, often known as the hardware address

12 hexadecimal digits, 48 bits in length

Written in format- MM:MM:MM:SS:SS:SS

The first half identifies the manufacturer (the OUI) and the second half is a serial number assigned to the adapter by the manufacturer

MAC addresses work at layer 2, IP addresses at layer 3
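The MM:MM:MM:SS:SS:SS split can be sketched in a few lines (split_mac is a hypothetical helper; the MAC shown is just an example):

```python
def split_mac(mac: str):
    # Hypothetical helper: first 24 bits = manufacturer (OUI),
    # last 24 bits = serial number assigned by the manufacturer
    parts = mac.split(":")
    assert len(parts) == 6, "expected MM:MM:MM:SS:SS:SS format"
    return ":".join(parts[:3]), ":".join(parts[3:])

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui)     # 00:1A:2B
print(serial)  # 3C:4D:5E
```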

OSI Model (Open Systems Interconnection):

Physical: Defines the physical media, i.e. cables, connectors etc

Data link: Defines the data format. Converts raw bits from the physical layer into data frames for delivery to the network layer. Common device at this layer: switch

Network layer: Addressing, determining routes, subnet traffic control etc. IP addresses are added at this point, and data at this layer is called a packet. Common device at this layer: router

Transport layer: End-to-end message delivery. Reliable and sequential packet delivery through error recovery and flow-control mechanisms; uses techniques like cyclic redundancy checks, windowing and acknowledgements. Eg: TCP & UDP

Session layer: Manages user sessions and dialogues. Controls establishment and termination of logical links between users. Eg: a web browser makes use of the session layer to download the various elements of a web page from a web server

Presentation layer: Encoding, decoding, compression, decompression, encryption, decryption etc. happen at this layer. Eg: conversion of .wav to .mp3

Application layer: Presents data and images to the user in a human-recognizable format. Eg: Telnet, FTP etc


Reference: http://www.inetdaemon.com/tutorials/basic_concepts/network_models/osi_model/osi_model_real_world_example.shtml


Tuesday, September 10, 2013

DHCP superscope


A DHCP superscope is, in simple terms, a logical grouping of DHCP scopes. Superscopes are used in scenarios where multiple subnets are created in a particular VLAN. In this case, your VLAN configuration would look like this:

Interface vlan 107
ip address 10.120.12.1/24
ip address 10.120.13.1/24 secondary
ip address 10.120.14.1/24 secondary

Create scopes for all the above subnets in your DHCP server, then create a superscope and add the scopes to it.

The ideal case is to have one subnet per VLAN and to create individual scopes in DHCP for these VLANs. You will have to configure an IP helper address for these VLANs and point it to your DHCP server's IP address, so that clients in the various subnets get IPs from the DHCP server. Your VLAN configuration would look like this (assume that the IP of your DHCP server is 10.120.12.3)

vlan 12
interface vlan12 ip address 10.120.12.1/24
vlan 13
interface vlan13 ip address 10.120.13.1/24
ip helper-address 10.120.12.3
vlan14
interface vlan14 ip address 10.120.14.1/24
ip helper-address 10.120.12.3

Here we have created virtual interfaces (at layer 3) which can do inter-VLAN routing. A DHCP request for a VLAN received at the virtual interface is forwarded to the DHCP server 10.120.12.3, after the giaddr field is set to the interface IP. When the DHCP server receives the request, it compares the giaddr subnet with the configured scopes. When it finds a match, the IP allocation process is initiated
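The server side of this giaddr-to-scope matching can be sketched as a toy lookup (the scope names and lookup logic are illustrative only, not an actual DHCP implementation):

```python
import ipaddress

# Toy sketch: the DHCP server matches the relay agent address (giaddr)
# against its configured scopes to pick the subnet to allocate from
scopes = {
    ipaddress.ip_network("10.120.12.0/24"): "scope-vlan12",
    ipaddress.ip_network("10.120.13.0/24"): "scope-vlan13",
    ipaddress.ip_network("10.120.14.0/24"): "scope-vlan14",
}

def select_scope(giaddr: str):
    addr = ipaddress.ip_address(giaddr)
    for network, name in scopes.items():
        if addr in network:
            return name
    return None   # no matching scope: the server cannot offer a lease

print(select_scope("10.120.13.1"))   # scope-vlan13
```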



Thursday, September 5, 2013

VMware data recovery troubleshooting

If a VDP backup fails, the following troubleshooting steps can be used

  1. SSH to the VDP appliance and browse to /usr/local/avamarclient
  2. Search for logs related to the VM: grep -r -a "VM_NAME" ./*
  3. If you suspect it is a snapshot-related issue: grep -r -a "VM_NAME" ./* | grep "FATAL"
  4. To be more specific and check messages for a certain date, try searching by date: grep -r -a "VM_NAME" ./* | grep "2013-08-02"
  5. Sometimes very useful information can be found in the "info" messages as well. To narrow down to these, you can use the command: grep -r -a "VM_NAME" ./var-* | grep "2013-07-03"
  6. The above command searches only through the 'var-proxy' directories. It will display the entire log file; you can less it to view details for a specific date, eg: less ./var-proxy-5/VMGROUP1-1378306800496-35fj52c29f48eeejef090b27edaeba3d868719e8-4016-vmimagew.log
    /2013-07-03 07:10:00
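The grep pipeline from steps 2-4 can be tried out safely against a throwaway log tree (the directory, file name and log line below are fabricated for illustration):

```shell
# Build a fake avamarclient log tree and run the same search pattern against it
mkdir -p /tmp/avamarclient-demo/var-proxy-1
printf '2013-07-03 07:10:00 avvcbimage FATAL <16018>: snapshot error for VM_NAME\n' \
  > /tmp/avamarclient-demo/var-proxy-1/demo-vmimagew.log
cd /tmp/avamarclient-demo
grep -r -a "VM_NAME" ./* | grep "FATAL"
```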

Error messages:

Message 1:
avvcbimage FATAL <16018>: The datastore information from VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx' will not permit a restore or backup. 

Reason: The most common reason is that a snapshot file is present but is not displayed in the snapshot manager. To resolve this,
  1. SSH to the ESX host running the VM
  2. Browse to the VM's datastore: cd /vmfs/volumes/datastore_name/VM_name/
  3. Check if there are any delta files, i.e. files with -delta or -00001 etc. in the name
  4. Now check if any of these files are in use by checking the vmx file: grep "vmdk" ./*.vmx
  5. If the files are not referenced in the vmx, we can safely delete the delta files or move them to a temp directory: mkdir old-delta-files ; mv vm_name.000*.vmdk old-delta-files/
  6. Confirm that the files have been moved or deleted
Message 2:

avvcbimage FATAL <14688>: The VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx' could not be snapshot.

Reason: One possible reason is that a manually executed backup overran the scheduled backup in VDR

Message 3:
2013-07-03 17:00:57 avvcbimage Info <14642>: Deleting the snapshot 'VDP-137830742335fc52c29f98eeebef090b22edaeba3p868716e8', moref 'snapshot-17946'
2013-07-03 17:00:57 avvcbimage Info <0000>: Snapshot (snapshot-17946) removal for VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx task still in progress, sleep for 2 sec
2013-07-03 17:00:57 avvcbimage Info <0000>: Snapshot (snapshot-17946) removal for VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx task was canceled.

2013-09-04 17:00:57 avvcbimage Info <0000>: Removal of snapshot 'VDP-VDP-137830742335fc52c29f98eeebef090b22edaeba3p868716e8' is not complete, moref 'snapshot-17946'

Reason: This happens because VDP does not get enough time to delete the snapshots created during the backup operation. The solution is to increase the timeout value to allow enough time for the snapshots to commit.

To increase this timeout value:
1. Open an SSH session to the VDP server.
2. Change to the /usr/local/avamarclient/var directory using this command:
# cd /usr/local/avamarclient/var
3. Open the avvcbimage.cmd file using a text editor. For more information, see Editing files on an ESX host using vi or nano (1020302).
4. Add this entry to the file:
--subprocesstimeout=600
5. Restart the avagent service using this command:
# service avagent restart

Reference: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044821

Thanks to my colleague Tom for his valuable inputs for this article