Sunday, July 3, 2016

DRaaS using Azure: How to protect your on-prem physical machines.... #MyAzureLabs

BC/DR is a key consideration for all organizations, big or small. Thanks to Azure, we now have an affordable and easy-to-implement BC/DR solution. Azure Site Recovery (ASR) can be used for a multitude of disaster recovery scenarios, with an economical pay-as-you-go pricing model. The DR scenarios currently catered to by ASR are:

DR site in Azure
- Physical machines to Azure
- VMware environment to Azure
- Hyper-V (with or without VMM) to Azure

DR site in a secondary DC, with orchestration by ASR
- VMM site to site
- VMware/Physical to VMware
- VMM to VMM (SAN replication)

This week in my Azure labs, I tried out the first scenario, i.e. DR from on-prem physical machines to Azure. This blog is all about my little experiment and some tips and tricks that I learned along the way.

The following link, which explains the procedure for protecting a physical/VMware environment, is a good starting point: https://azure.microsoft.com/en-in/documentation/articles/site-recovery-vmware-to-azure/

I will use this article, which is very detailed and well written, as a reference point, and go into further detail on a few of the areas it mentions. Based on my experience, I think this will be helpful for someone trying to set up physical-server-to-Azure replication for the first time.

You should ensure that the prerequisites for physical server protection mentioned in the link are taken care of. You need to run the Site Recovery Unified Setup to install the configuration and process server. Refer to "Step 2: Set up the source environment" in the link above for details on the initial setup of the vault, setting up the configuration server, registering it in Azure etc. These steps are pretty straightforward, and a detailed explanation of the on-prem configuration server setup is also given in Step 2 of the article.

Let's assume that you have done the initial vault creation and configuration server setup, created the target environment in Azure (resource group, storage, network etc.) and have also created the replication policies to be used. All of these come under "Step 1: Prepare your infrastructure" in your Site Recovery vault, and are again clearly explained in the official documentation: https://azure.microsoft.com/en-in/documentation/articles/site-recovery-vmware-to-azure/

Now let's see what needs to be done at the physical server end to enable the protection.

Steps to be done on the physical server:

1) Set up the registry key entry
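
The screenshot here showed the exact registry change. Since my machine was not domain joined and I used a local admin account, the entry in question is most likely the standard ASR requirement to disable Remote UAC; a minimal PowerShell sketch, assuming the documented LocalAccountTokenFilterPolicy value (verify against the official ASR docs):

# Assumption: this is the Remote UAC entry that ASR expects for local
# (non-domain) admin accounts on the protected machine
$regPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

# Create the DWORD if it does not exist, otherwise overwrite it
New-ItemProperty -Path $regPath -Name "LocalAccountTokenFilterPolicy" `
    -PropertyType DWord -Value 1 -Force | Out-Null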

2) Enable the following in "Allow an app or feature through Windows Firewall" (see the PowerShell sketch below if you prefer to script it):
    > File and Printer Sharing
    > Windows Management Instrumentation (WMI)
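
A quick sketch of the same change from PowerShell on Windows Server 2012 R2 or later; the display group names below are the standard built-in ones, so verify they match your OS edition and language:

# Enable the built-in firewall rule groups that the configuration/process
# server needs to reach this machine
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"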

3) In cspsconfigtool, add an account that has admin privileges on the target physical machine. The tool can be found at the following location on the configuration server.

Click on Add account

4) In my case, the physical machine was not added to a domain, hence I added a local admin user. The friendly name can be anything; it is just for identifying that account in the Azure portal.

5) Now you can install the mobility agent on the physical server. The installer can again be found on the configuration server at the following location. You need to select the installer based on the operating system type; in my case I selected the Windows installer.

Select the option to install the Mobility service.

Enter the configuration server IP and passphrase.

Specify the install location. That is all that is required; you can go to the next step and wait for the installation to complete.





Steps to be done in the Azure portal:

Now that the mobility agent is installed, you can refresh the configuration server in the Azure portal.

Go to <recovery services vault> -> Settings -> Site Recovery Infrastructure -> Servers, select the configuration server and click "Refresh Server".

Click OK on the message and wait for the refresh to complete.

Once the refresh is completed, the new physical server should ideally be reflected in the connected agents list.

Now you can go ahead and enable replication for your physical server. In the portal, go to <Recovery services vault> -> Settings -> Site Recovery -> Enable replication.

Enter the source. This will be your configuration server. The machine type will be "Physical machines", and the process server in this installation is the same as the configuration server.

Configure the target environment in Azure.


You need to select the target physical server at the next step. Click on the + sign

Enter the details of your on-prem physical server, i.e. server name, IP and OS type.

Click OK and wait for the server to be added.
Once the server is added, it will be listed in the blade. Select the server and click OK.

In the next step, configure the properties. If the agent is installed correctly and is detected by the portal, you will be able to select the disks that you want to back up, i.e. disks other than the OS disk.
From the account dropdown you can select the account that you created earlier in cspsconfigtool (refer to step 3).

In the "Configure replication settings" page, select the replication policy that you created earlier.

Now all the steps are done, and you can click "Enable replication" to protect your on-prem physical server.

You can click on notifications to see the progress of the task. You can also go to <site recovery vault> -> Jobs -> Site Recovery Jobs and select the "Enable protection" job to see the status.

If you see all green ticks, your machine's protection is enabled. You can see the status of replication from <site recovery vault> -> Replicated Items. Once the initial replication is completed, the status will be shown as Protected.

Now that the physical server is replicated and protected, we might want to test whether everything will work as expected during a disaster, right? That is where the test failover feature helps. I will cover that in my next blog post. Keep watching this space for more!!

Friday, June 17, 2016

The cloud has got your back(up): A primer on Azure Backup

Azure Backup offers a comprehensive cloud-based hybrid backup solution that enables backup of not only your Azure VMs, but also your files, folders, applications etc., both on-prem and in Azure. It can be used to replace your chunky on-prem backup solutions, tape drives, backup tapes and the like. In this blog, I will give a brief overview of the Azure Backup service, its advantages and the scenarios that it currently caters to.

Service highlights:

Azure Backup is offered as a complete backup-as-a-service offering. Let's take a look at a few highlights of the service.

Cost effective

You need not own any backup infrastructure (services, tools and devices) to use this service. You can directly subscribe to the service and pay based on your usage. There are no additional compute charges involved: you pay a fixed charge for each protected instance, plus the cloud storage that you consume for storing your backup data. The egress traffic for restores is free, in addition to the free ingress backup traffic to the cloud. Only the first backup is a full backup; subsequent backups are incremental. The data backed up from on-prem is compressed before being transferred to Azure, which reduces the storage space used for storing the backup and thereby the storage cost.

Resilient

It offers the flexibility of centralized backup management from the cloud. Since the backup is stored in the cloud, you can leverage its virtually unlimited scale and high availability. The backed-up data can be stored in either locally redundant storage (LRS) or geo-redundant storage (GRS). LRS keeps three copies of your data in a single region and is suitable for cost-conscious customers. GRS, in addition to the three local copies, stores three additional copies in a different geography, which provides additional resiliency in case of an Azure site-level disaster.

Secure

Ample emphasis was given to security while designing the service. The backup data is encrypted using a passphrase that is available only locally, and the data is encrypted both in transit and at rest. Only an administrator who possesses the passphrase can decrypt the data.

Consistent

The backup data can be application consistent, file consistent or crash consistent depending on your backup scenario. Application-consistent backups on Windows ensure that you need not make additional fixes to your application when you restore it, which greatly reduces the recovery time in case of a disaster; this makes use of VSS technology in Windows. Since VSS is not present in Linux, backups of Linux machines are file consistent. Crash-consistent backups are those taken when your machine is shut down.

Long term retention

You can store the backup data in the cloud for as long as 99 years!!


 Backup scenarios

When you sign up for the Azure Backup service, you will first create a backup vault in the cloud. It is nothing but a storage space for your backups. You can choose LRS or GRS storage depending on your resiliency preferences (a quick PowerShell sketch of this follows below). Azure Backup makes use of different components in different backup scenarios; for example, file and folder level backup needs a different tool than VM level backup. Let us take a look at the different components of Azure Backup.
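
For reference, the vault and its storage redundancy can also be created from Azure PowerShell. A rough sketch using the AzureRM.RecoveryServices module, assuming the newer Recovery Services vault experience and placeholder names:

# Create a Recovery Services vault to hold the backups (placeholder names)
$vault = New-AzureRmRecoveryServicesVault -Name "myBackupVault" `
    -ResourceGroupName "myResourceGroup" -Location "Southeast Asia"

# Choose the storage redundancy: GeoRedundant (GRS) or LocallyRedundant (LRS)
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault `
    -BackupStorageRedundancy GeoRedundant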


Azure backup agent

This is a standalone agent that can be installed for file, folder and volume level backup on a Windows OS. The machine can be physical or virtual and can reside either on-prem or in Azure. You can download the agent from the management interface of your backup service in Azure and install it on the target machine. The agent should be registered with the vault using the vault credentials, and a passphrase that encrypts the data in transit and at rest is created during the installation. You can restore the data to either the same machine or a different machine; you will have to provide the passphrase to initiate the restore process.
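
As a rough illustration of the registration flow described above, the agent ships with its own PowerShell module (MSOnlineBackup). This is only a sketch: the vault credentials path and passphrase are placeholders, and the exact parameter names should be verified against the agent documentation.

# Register this machine with the vault using the downloaded credentials file
Start-OBRegistration -VaultCredentials "C:\Downloads\myVault.VaultCredentials"

# Set the encryption passphrase - keep a copy safe, restores require it
$passphrase = ConvertTo-SecureString -String "<your 16+ character passphrase>" -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassphrase $passphrase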

System center Data protection manager + Azure backup agent

System Center DPM can work in conjunction with the Azure Backup agent to back up your workloads to Azure. It supports all major Microsoft workloads like SQL, SharePoint, AD, Exchange etc. in addition to file/folder backups and VM backups. This option is more suited for customers who already have an investment in the System Center suite of tools: they can install the backup agent on the DPM server and back up files, folders, VMs and applications to Azure. DPM can be hosted either on-prem or in Azure. It also supports VM level backup of Linux machines hosted in Hyper-V, and it makes use of app-aware VSS snapshots to ensure consistency of the backed-up data.


Azure backup server

This can be considered a stripped-down version of the DPM option. It provides all the functionality of DPM + backup agent, except the following:
- It doesn't need a System Center license or integration
- Tape drives are not supported
- An Azure subscription is required

Azure Backup Server supports pretty much all workloads supported by DPM. If you don't want to back up to the cloud, you can even use it for on-prem disk-to-disk protection. You can consider it a subscription-based backup service where you are charged based on the number of protected instances; if you are backing up to the cloud, you will be charged for the cloud storage as well.

Azure IaaS VM backup

This is a straightforward VM-level backup of the VMs that you host in Azure using the backup service. You can back up both Linux and Windows VMs with no additional agent installation (a minimal PowerShell sketch follows below).
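
To give an idea of how little is involved, here is a minimal sketch of enabling backup for an Azure VM with the AzureRM.RecoveryServices.Backup cmdlets; the vault, policy and VM names are placeholders:

# Point the backup cmdlets at the vault created earlier (placeholder names)
$vault = Get-AzureRmRecoveryServicesVault -Name "myBackupVault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Use the default policy (or one you created) and protect the VM
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
    -Name "myAzureVM" -ResourceGroupName "myResourceGroup"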


That is Azure Backup in a nutshell. You can refer to the official Azure documentation here to understand more about each scenario and the service's capabilities.

Keep watching this space for more articles on Azure!!!





Wednesday, June 1, 2016

Azure VM migration using PowerShell


Microsoft recommends using ARM for all new deployments in Azure, and all new developments/features/services will be available in ARM going forward. But there are a lot of services that are yet to be migrated to ARM. What if one of the services you want to use is not currently available in ARM and you have already set up the rest of your environment in ARM? In such a scenario, you can always set up a site-to-site VPN between the classic (v1) VNET and the ARM VNET. This process is also well documented:

 
That being the case, what if you want to test the interoperability of services and need to move a few already set up VMs from ARM to classic? I know it is not a very common scenario, and it is not a recommended approach for production deployments; ARM is definitely the way to go. However, for enabling that test run you might badly want to do before taking the plunge, let's look at the process of creating a new VM in the classic portal from the hard disk of an ARM VM, using Azure PowerShell.

First you will have to log in to your Azure account:

 Login-AzureRmAccount

Enter the source blob URI, i.e. the location of the ARM VM's VHD:

$sourceBlobUri = "https://<Source-storagename>.blob.core.windows.net/vhds/<vhdname>.vhd"

Set the Source context

$sourceContext = New-AzureStorageContext -StorageAccountName "<Source-storagename>" -StorageAccountKey "<Storage access key>"

In the destination context, give the name of your classic storage and its key

$destinationContext = New-AzureStorageContext -StorageAccountName "<dest-storagename>" -StorageAccountKey "<Storage access key>"

Copy the vhd to the destination storage

Start-AzureStorageBlobCopy -srcUri $sourceBlobUri -SrcContext $sourceContext -DestContainer "vhds" -DestBlob "rds1201647182929.vhd" -DestContext $destinationContext

This command will copy the VHD from the source storage to the container named 'vhds' in the destination storage. Ensure that your VM is in a stopped (deallocated) state during this procedure. In my experience, the copy took only a few minutes.
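
Since Start-AzureStorageBlobCopy only kicks off an asynchronous server-side copy, you may want to poll until it finishes before moving on. A small sketch, using the container and blob names from the command above:

# Wait for the asynchronous server-side copy to complete
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "rds1201647182929.vhd" `
    -Context $destinationContext -WaitForComplete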

Now we need to add this VHD as an OS disk in the gallery. Start by importing the publish settings file of the subscription:

 Get-AzurePublishSettingsFile

Download the publish settings file and import it

 Import-AzurePublishSettingsFile  '<Publish settings file name>'

Set the current subscription

Set-AzureSubscription -SubscriptionName "<Subscription name>"

Now add the OS disk

Add-AzureDisk -DiskName "OSDisk" -MediaLocation "https://<dest-storagename>.blob.core.windows.net/vhds/<vhdname>.vhd" -Label "My OS Disk" -OS "Windows"

Refresh Azure classic portal

The OS disk will be listed in the gallery. Now you can go ahead and create a new VM from the disk!!
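
If you prefer to stay in PowerShell instead of using the classic portal, a hedged sketch of creating the classic VM from the registered disk could look like this; the cloud service name, instance size and VM name are placeholders:

# Build a classic VM configuration from the registered OS disk and create the VM
$vmConfig = New-AzureVMConfig -Name "myClassicVM" -InstanceSize "Small" -DiskName "OSDisk"
New-AzureVM -ServiceName "myCloudService" -Location "Southeast Asia" -VMs $vmConfig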

Note: The reverse of what is explained here is also possible, i.e. migration from classic to ARM. There are well-documented tools available for the same: https://github.com/fullscale180/asm2arm


Ref: https://azure.microsoft.com/en-in/documentation/articles/storage-migration-to-premium-storage/

Sunday, May 29, 2016

Azure automation:Using Graphical runbooks



Azure Automation can be an Azure administrator's best friend and can ease your day-to-day administration work. There are three options available in Azure Automation: graphical runbooks, PowerShell Workflow runbooks and PowerShell based runbooks.
 
If you want to play around with Azure Automation and quickly automate some daily mundane tasks, graphical runbooks are the easiest to start with. You can find many templates in the runbook gallery in Azure that can easily get the job done for you. Let's start with the basics. One of the common tasks is to start or stop VMs at a scheduled time, for example Dev/Test machines that should be shut down after office hours. The runbooks for this are readily available in the gallery. In this blog post we will focus on a graphical runbook from the gallery that can be used to start or stop VMs at a scheduled time.
 
                 Schedule automated start and stop of VMs using a graphical runbook

Create a new Azure Automation account from the portal using the default settings. Go to New -> Management -> Automation -> New automation account.
 
 
You will get a confirmation  message as follows
 

Browse Automation accounts and select the newly created account. Click on the Runbooks tile -> Browse gallery.
 
Search for the graphical runbook “Start Azure V2 VM”. For stopping, you can use the graphical template “Stop Azure V2 VM”.

Import the template

Click on Edit and then “Test pane” to do a test run of the template.
 
If you want to start a single VM, give the name of the VM as an input parameter. Click on Start to do a test run.

Publish the runbook to make it available in your automation account
 
Now you can add a schedule to the runbook
 
Configure the time when you want the schedule to run

Note that the time of the schedule here will depend on your local time

Configure the input parameters for the schedule, for example the VM name or resource group name.
If you want to start a single VM in a given resource group, specify the name of the resource group as well as the VM name. If you want to start all VMs in a resource group, give the name of the resource group alone.
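
For completeness, the same schedule-and-link step can also be scripted with the AzureRM.Automation cmdlets. A rough sketch, assuming the imported runbook name and its parameter names match what the gallery template expects (account, group and VM names are placeholders):

# Create a daily schedule in the automation account (placeholder names/times)
$schedule = New-AzureRmAutomationSchedule -AutomationAccountName "myAutomationAccount" `
    -ResourceGroupName "myResourceGroup" -Name "StartVMs-7AM" `
    -StartTime (Get-Date "07:00").AddDays(1) -DayInterval 1

# Link the imported runbook to the schedule and pass the input parameters
Register-AzureRmAutomationScheduledRunbook -AutomationAccountName "myAutomationAccount" `
    -ResourceGroupName "myResourceGroup" -RunbookName "Start Azure V2 VM" `
    -ScheduleName "StartVMs-7AM" `
    -Parameters @{ ResourceGroupName = "myDevTestRG"; VMName = "myDevVM" }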
 
Sunday, September 27, 2015

Decoding Docker - Part 3 : Docker files

Hope you have gone through Part 1 and Part 2 of my blog series on Docker. In this post in the series, I am exploring the creation of Docker images.

There are multiple ready-made Docker images available in Docker Hub, which you can simply pull and use. However, what if you need a different combination of software versions than what is readily available? Simple: you create an image of your own with all the required software installed.

There are two options to do this. The easiest is to pull a base image, install everything you want and commit the container as a new image. But what if you would like to make some changes down the line? You may have to redo the whole thing. That is where creating the image from a Dockerfile helps: you simply write a Dockerfile to install and configure the required software, and if you wish to make changes at a later point, you can edit the Dockerfile and build a new image.

In this example, I will explain the process of creating a Docker image with Apache installed and the relevant ports opened. Let's start with creating a Dockerfile. It is a simple text file with the name "Dockerfile":

#vi Dockerfile

We will be using the CentOS base image, so the first line of the Dockerfile specifies which image to use:

FROM centos

Now let's install all the required software using the RUN instruction:

RUN yum -y update
RUN yum -y install python-setuptools
RUN easy_install supervisor
RUN mkdir -p /var/log/supervisor
RUN yum -y install which
RUN yum -y install git

Now build the Dockerfile into an image:

docker build -t custom/base .

Notice the "." at the end; you should run the command from the directory where the Dockerfile exists. Now you have created your base image. Let's install Apache next. Edit the Dockerfile and add the following content:

FROM custom/base
RUN yum -y install httpd
ADD supervisord.conf /etc/supervisord.conf
EXPOSE 22 80 
CMD ["/usr/bin/supervisord"]

We installed supervisord in the base image to manage the processes within the container; case in point, the Apache service. Now let's write a supervisord config file to start the service on container startup:

vi supervisord.conf

Add the following content

[supervisord]
nodaemon=true

[program:httpd]
; run Apache in the foreground so supervisord can keep managing the process
command=/bin/bash -c "exec /usr/sbin/apachectl -D FOREGROUND"

Run docker build to create the image:

docker build -t custom/httpd .

Now let's spin up a container from the image:

sudo docker run -p 80:80 -v /root/htdocs:/var/www/html -t -i custom/httpd

Note: You can create a folder named /root/htdocs on the host and use the -v switch to mount it at /var/www/html in the container, so that the storage is persistent.
The -p switch maps port 80 of the container to port 80 of the host.



Tuesday, September 1, 2015

Decoding Docker - Part 2

Docker Remote Registry

Continuing the blog series on my trysts with Docker, in this installment we will look into the details of how to set up a Docker remote registry. Hope you now have an idea of how to get Docker up and running; if not, go ahead and read the first part of my blog series here.

Now that we have the Docker engine up and running, and a few containers spun up in it, we might very well think about a centralized Docker image repository. Of course we have Docker Hub, and you could save your images there. But what if you want a bit more privacy and would like to keep all your hard work in house? That is where the Docker remote registry comes in handy.

A Docker remote registry can be set up on a local machine for centralized storage of Docker images. You can pull and push images just like you do with Docker Hub. It allows centralized collaboration between people working on Docker containers in your firm. For example, a developer working on a project can save the current status of his container as an image and push it to the remote registry; a fellow team mate can then download the image, spin up a container and continue the work. This is just one of the use cases; the functionality is somewhat similar to an SVN repository. However, one major drawback I noticed was the lack of a search/list functionality.

Here is how you can set it up:

Server side configuration:

To start with, you will need a certificate for connecting to the remote registry. Let's create one using openssl on the machine where you plan to set up your Docker remote registry:

Monday, August 31, 2015

Decoding Docker - Part 1

Having worked with multiple virtualization platforms, I recently got an interesting opportunity to work with their younger sibling, containerization. The platform of choice was obviously Docker. Getting Docker up and running in the OS of your preference is a simple task; you can straightaway get it done using the instructions here. The interesting part is getting to play around with it.

 Getting it up and running:

Docker can be started as a service or as a daemon listening on a TCP port. Starting it as a service is pretty straightforward:

#service docker start

However, the interesting bit is when you want to run it as a daemon listening on a specific port. This is useful in scenarios where you want to manage the Docker engine remotely, say using a Windows Docker client or one of the open source GUIs available for Docker like Shipyard and Mist.io.

The command to run Docker as a daemon listening on a port is:

# /usr/bin/docker  -d -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock &

Here Docker will listen on all IPs of the machine on port 4243. If you want to connect to this Docker engine from a remote Docker client, the following command can be used:

#docker -H tcp://<docker engine host>:4243 <commands>

For eg: #docker -H tcp://<docker engine host>:4243 ps

One downside of this method is that there is no inherent authentication mechanism for remote access.

Spin up your containers:

Let's start by pulling an image from Docker Hub, which is a public repository of Docker images.

Friday, May 1, 2015

Cloud security - CSA domains

This is the second post in the blog series on Cloud security. You can see the first blog post here

The Cloud Security Alliance (CSA) provides actionable best practices for businesses to transition to cloud services while mitigating the risk involved in doing so. As per the latest version of the CSA guide, the critical areas of focus in cloud computing are divided into fourteen domains.



Saturday, April 18, 2015

Cloud Security - Risk factors

Cloud security is a major consideration for enterprise-wide cloud adoption, especially for public cloud. This is part 1 of a series of blog posts where I am planning to pen down the different dimensions of cloud security, starting with the risk factors of cloud adoption.

The various attributes of security risks  involved in the process can be summed up as follows:


ENISA* recommends the following  risk areas to be taken into account, while embarking on a cloud adoption journey

Thursday, October 16, 2014

OpenStack icehouse installation error : nova-api service getting stopped

While trying to install OpenStack Icehouse, I faced an issue with the nova-api service: it was not getting started. The following error was coming up in the nova-api log:

Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
Exit code: 1
.......

 nova Stdout: ''
2014-10-17 07:21:08.058 27270 TRACE nova Stderr: 'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 6, in <module>\n    from oslo.rootwrap.cmd import main\nImportError: No module named rootwrap.cmd\n'


The problem was with the oslo.rootwrap module; it was broken.

The solution is to upgrade the module using pip:

 #pip install oslo.rootwrap --upgrade