Thursday 21 November 2019

Vembu Office 365 & G Suite Backup v2.0 Beta now available

Vembu recently announced Vembu Office 365 & G Suite Backup v2.0 (Beta). This beta release is packed with new features and enhancements, and is a reliable, easy-to-use backup & recovery solution to protect critical user data in Office 365 & G Suite.
  • Vembu Office 365 Backup – Protect your Exchange Online (mail, calendar, contacts) & OneDrive
  • Vembu G Suite Backup – Protect your Gmail, Calendar, Contacts & Drive

New Features:

On-Premise Deployment

From v2.0, Vembu provides on-premise deployment support for Office 365 and G Suite Backup. This allows us to protect Office 365 & G Suite data by backing it up to local storage.
Vembu's on-premise installer for Office 365 Backup & G Suite Backup is available for Windows OS and comes with a 30-day FREE trial.

File Retention

File Retention lets us retain multiple versions of files that are backed up from OneDrive & Google Drive. The default retention count is 5 (i.e., five versions of a specific file will be retained), and the maximum retention count can be set up to 99.
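A version-retention policy like this can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Vembu's actual implementation; the version list and function name are hypothetical:

```python
def prune_versions(versions, retention_count=5):
    """Keep only the newest `retention_count` versions of a file.

    `versions` is a list of (timestamp, version_id) tuples; older
    versions beyond the retention count are dropped.
    """
    if not 1 <= retention_count <= 99:  # limits described in the release notes
        raise ValueError("retention count must be between 1 and 99")
    # Sort newest-first and keep the first `retention_count` entries
    return sorted(versions, reverse=True)[:retention_count]

history = [(ts, f"v{ts}") for ts in range(1, 8)]  # 7 backed-up versions
kept = prune_versions(history)                    # default count of 5
```

With the default count of 5, the two oldest of the seven versions would be pruned.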
OneDrive support for all users

With v2.0, OneDrive backup is supported for all Office 365 users under the domain account, whereas the previous version supported it only for tenant admins.
Key Highlights of Vembu Office 365 & G Suite Backup:

We can now back up Office 365 & G Suite user data to local storage (using the on-premise installer) or to Vembu Cloud, as per our requirement.
  • Backup the entire domain account or at a user level
  • Set the backup schedule and run automated daily backups (incremental)
  • Flexibility to restore the data at any point in time
  • Restore the user data to same/different account
  • Reports are generated for all states of backup & restore activities

Download/Signup to try the 30-day free trial!

Backup Office 365 & G Suite data to local storage – 
Backup Office 365 & G Suite data to Vembu Cloud –

Tuesday 12 November 2019

Best Practices for Protecting your Data Center from Disaster

No organization can afford a sudden disaster, and a data-conscious one will always find optimal strategies to make sure that business continuity is not affected and critical customer data is not lost.

The webinar will cover:
  • Creating a checklist of data-center objectives
  • Assessing the risks to data and operations
  • Developing a backup and DR plan
  • Implementing industry best practices

Friday 27 September 2019

Vembu BDR Suite Free Edition - Now available with Full-Feature Backup support for up to 10 VMs

What does Vembu BDR Suite – Free Edition offer?
  • Unlimited VM Backup – Full-feature for up to 10 VMs and limited features for the rest of the VMs 
  • Unlimited Physical Windows Server Backup
  • Unlimited Windows & Mac Workstations Backup
  • Agentless VM backup
  • Near Continuous Data Protection (15 mins RPO)
  • CBT based incremental backup
  • Application-aware backups
  • Quick VM Recovery (RTO<15 mins)
  • Granularly recover Files & Application items

Know more about Vembu BDR Suite – Free Edition

Vembu BDR Suite v4.0.2 – 30-day FREE TRIAL

Tuesday 5 March 2019

How to Configure VMware vSphere HA Orchestrated Restart

We have been using the VMware vSphere High Availability (HA) feature for a long time now: we add ESXi hosts to the HA cluster and let the election process ensure that one of the hosts becomes the Master host while the rest act as Slave hosts. With vSphere HA, virtual machines achieve better protection against unplanned downtime, be it an application failure, guest OS failure, ESXi host failure, network failure or datastore failure.

vSphere HA does not just provide protection against unplanned downtime; with admission control policies it also ensures that enough resources are available within the cluster to power on the virtual machines. This means the cluster should not only meet the reservation requirements of the virtual machines during host failures but also their overall allocation requirements.

vSphere HA now calculates failover capacity based on a percentage of cluster resources rather than the old default slot-size calculation. This was one of the improvements made to vSphere HA admission control in VMware vSphere 6.5, where the default option for defining failover capacity is Cluster resource percentage, with the option to choose slot size instead or to disable admission control completely.
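The cluster-resource-percentage idea is simple arithmetic. The sketch below is an illustration of that policy with hypothetical numbers, not a vSphere API; it checks whether a new VM's reservations fit without eating into the capacity held back for failover:

```python
def admission_check(cluster_cpu_mhz, cluster_mem_mb,
                    reserved_cpu_mhz, reserved_mem_mb,
                    failover_pct, vm_cpu_mhz, vm_mem_mb):
    """Return True if the VM's reservations fit without consuming the
    configured failover-capacity percentage (a simplified model of the
    vSphere 6.5 'Cluster resource percentage' admission control)."""
    usable_cpu = cluster_cpu_mhz * (1 - failover_pct / 100)
    usable_mem = cluster_mem_mb * (1 - failover_pct / 100)
    return (reserved_cpu_mhz + vm_cpu_mhz <= usable_cpu and
            reserved_mem_mb + vm_mem_mb <= usable_mem)

# 3-host cluster: 30 GHz CPU / 96 GB RAM, 25% held back for failover
ok = admission_check(30000, 96000, 18000, 60000, 25, 2000, 4000)
```

Here 25% of the cluster is reserved, leaving 22,500 MHz and 72,000 MB usable, so a VM reserving 2,000 MHz / 4,000 MB is admitted.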

Prior to VMware vSphere 6.5, vSphere HA was only concerned with securing the resources required for the virtual machines and restarting them; we used the VM override option to define the VM restart priority (Low, Medium, High) in which those resources are secured. However, this becomes a problem with three-tier applications when there are plenty of resources available for every VM.

In that case the VMs receive their allocations quickly and start powering on, and the database server may take more time to come up than the application server, so the application fails because it cannot reach the database. The problem with this approach was that, despite the restart priority being configured, there was no mechanism to determine VM readiness.

The above issues can be addressed by using vSphere HA Orchestrated Restart, which performs a number of checks to ensure that virtual machine resources are secured and that the virtual machine is actually ready. The checks are: 1) resources secured – vSphere HA first ensures the VM's CPU and memory reservation requirements can be met on one of the hosts in the cluster; 2) VM is powered on; 3) wait for the VMware Tools heartbeat – ensuring the guest operating system has started inside the virtual machine; and 4) wait for the VMware Tools application heartbeat – confirming that the VM is ready and its services are now available.

While configuring vSphere HA Orchestrated Restart, VMs can be grouped into tiers representing their startup priority: the priority-1 tier virtual machines receive resources first and are powered on, and only after all the VMs in that tier have met their restart condition does vSphere HA move on to the priority-2 VMs. The restart-tier dependency is a soft rule, meaning we can configure a timeout value so that if one VM in tier 1 is problematic, the other VMs can still be started.

In addition to the default VM restart priorities of Low, Medium and High, which have been there in older VMware vSphere versions, two new restart priorities, Lowest and Highest, were added with vSphere 6.5. However, the configured restart priority has no effect on agent VMs and Fault Tolerance secondary virtual machines; agent VMs are always given the topmost restart priority.

We can also configure dependencies between virtual machines, either within the same tier or across tiers, where a VM will not power on until the VM it depends on has started. Unlike restart priorities, which are soft rules (a timeout period is allowed before proceeding further), VM dependencies are hard rules: if the first VM does not meet its restart condition, HA will not start the second VM.
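The tier-by-tier flow described above can be sketched as a small loop. This is purely illustrative of the soft-rule behaviour (real vSphere HA exposes this through cluster settings, not any such API; the VM names and the `vm_ready` callback are hypothetical):

```python
def restart_by_tier(tiers, vm_ready, timeout_ok=True):
    """Restart VMs tier by tier. `tiers` is a list of lists of VM names,
    `vm_ready(vm)` reports whether a VM has met its restart condition.
    A tier's stragglers are tolerated (soft rule) when timeout_ok is True,
    modelling the configurable restart-tier timeout."""
    started = []
    for tier in tiers:
        for vm in tier:
            started.append(vm)  # power-on attempt for every VM in the tier
        if not all(vm_ready(vm) for vm in tier) and not timeout_ok:
            break               # without the timeout escape, later tiers wait forever
    return started

# Three tiers; the app VM never reports ready, but the timeout lets web start
order = restart_by_tier([["db"], ["app"], ["web"]], lambda vm: vm != "app")
```

With the soft-rule timeout in effect, all three tiers are started in order even though the tier-2 VM is problematic.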

Wednesday 27 February 2019

What do you expect from a free edition backup software ?

Recently I got to know that Vembu is running a survey where we can participate and tell them about our expectations from free-edition backup software. I was quite impressed to learn that Vembu is doing this, because it is really important to understand user expectations in order to groom the product so that it meets the demands of both current and upcoming users.

Vembu has been offering a free edition since 2016 for VMware, Hyper-V, Windows Server and Windows Workstation, and is constantly improving the user experience of Vembu BDR Suite based on the feedback and requests received from users.

Vembu Free Edition Survey

I did my part by recently submitting this short survey, highlighting the key features I expect from a free edition of backup software. From the wide range of features available, including Application-Aware Backup Processing, Automatic Backup Scheduling, CBT and many more, I selected the ones I mostly use and want to see added as part of the free edition.

** Anyone can take this survey by following the link Vembu Free Edition Survey, let them know about your expectations, and stand a chance to win an Amazon Gift Voucher worth $20 for sharing your thoughts.

Tuesday 19 February 2019

My Google Cloud Platform Study Notes - Part 1

As part of my preparation journey to achieve Google Cloud Certified credentials, I have been going through various articles and referring to the official Google Cloud documentation. Alongside, I am also working on study notes that will not only bring me one step closer to the Google certifications but will also serve as a quick reference whenever required. I thought of dedicating a series of articles to sharing these notes, which will help not only me but also those individuals who are planning to start their journey with Google Cloud Platform.

But before we go ahead with a quick introduction to Google Cloud Platform, it's important to note that there are various certifications currently offered by Google, including Associate Cloud Engineer, Professional Cloud Architect, Professional Data Engineer, Professional Cloud Security Engineer, Professional Cloud Developer, Professional Cloud Network Engineer and G Suite. This blog series will focus on core Google Cloud Platform services, which will help us prepare for the Associate Cloud Engineer certification (a good starting point for someone beginning their journey in GCP) and the Professional Cloud Architect certification (design, build and manage solutions on Google Cloud Platform), but it can also serve as a quick reference for starting the other certifications.
  • Google Cloud Platform offers five main kinds of services: computing and hosting (serverless hosting, PaaS, containers and virtual machines), storage (Cloud SQL for MySQL/PostgreSQL databases, Cloud Datastore and Cloud Bigtable for NoSQL data storage, and persistent disks), big data (BigQuery, Cloud Dataflow, Cloud Pub/Sub), machine learning (Google Machine Learning Engine) and networking (VPC, load balancing, Cloud DNS, firewall rules, routes).
  • Serverless hosting provides a serverless execution environment for connecting various cloud services; Platform as a Service with Google App Engine helps scale the compute resources required by an application automatically; GCP offers Containers as a Service built on the open-source Kubernetes system; and virtual machines provide Infrastructure as a Service capabilities.
  • Cloud SQL is a fully managed database service that helps us set up and maintain relational databases in GCP, with two choices offered: Cloud SQL for MySQL and Cloud SQL for PostgreSQL. Cloud Datastore and Cloud Bigtable are the NoSQL data-storage options, and we also have persistent disks on Compute Engine instances with various options to choose from, including zonal standard and zonal SSD persistent disks, regional standard and regional SSD persistent disks (block-level storage), local SSDs, and Cloud Storage buckets (object storage).
  • BigQuery can be used to create custom schemas that organize our data into datasets and tables, Cloud Dataflow can be used for batch and streaming data-processing tasks, and Cloud Pub/Sub is an asynchronous messaging service.
  • Virtual Private Cloud (VPC) provides networking services to virtual machine instances running in GCP. We can also make use of firewall rules to control traffic to our virtual machine instances, and to implement more advanced networking features like VPN we can use routes in GCP.
  • IaaS offerings provide raw compute, storage and network, while PaaS offerings bind the application code we write to libraries that give access to the infrastructure our application needs. In the IaaS model we pay for what has been allocated, whereas in the PaaS model we pay for what we use. For our instances running in GCP we can also implement Network Load Balancing and HTTP/HTTPS Load Balancing.
  • Google Cloud Platform offers various machine learning services: we can choose from a variety of APIs that provide pre-trained models for specific applications, or we can build and train models ourselves using the TensorFlow framework.
  • All Google Cloud Platform resources belong to a project; projects are the basis for enabling and using GCP. All the resources we use, whether they're virtual machines, Cloud Storage buckets, BigQuery tables or anything else in Google Cloud Platform, are organized into projects. Optionally, projects may be organized into folders.
  • A zone is a deployment area for Google Cloud Platform resources. Zones are grouped into regions, which are independent geographic areas, and we can choose which regions our GCP resources are in. All the zones within a region have fast network connectivity among them.
  • While selecting the region and zone for our compute instances, it is worth distributing resources across multiple zones and regions to tolerate outages. All zones are independent of one another, which means a failure in one zone will not affect any other zone within the same region.
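The advice above about spreading instances across zones can be illustrated with a simple round-robin placement sketch. The zone names are real GCP zones, but the placement function and instance names are purely illustrative (real placement happens at instance-creation time via gcloud or the API):

```python
from itertools import cycle

def spread_across_zones(instances, zones):
    """Assign each instance to a zone round-robin, so that a single-zone
    outage takes down as few instances as possible."""
    placement = {}
    zone_cycle = cycle(zones)  # repeat the zone list indefinitely
    for name in instances:
        placement[name] = next(zone_cycle)
    return placement

zones = ["us-central1-a", "us-central1-b", "us-central1-c"]
plan = spread_across_zones([f"web-{i}" for i in range(6)], zones)
```

Six instances end up two per zone, so losing any one zone leaves four of the six running.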

Sunday 17 February 2019

My VMware User Group - Delhi/NCR Experience

On Saturday 16th Feb 2019 I attended the VMware User Group - Delhi/NCR event at Le Méridien Gurgaon, organised by awesome community members Amit Kumar Jain, Ankur Chopra and Raminder Singh. The VMware User Group (VMUG) is an independent, global, customer-led organization created to maximize members' use of VMware and partner solutions through knowledge sharing, training, collaboration and events.

The day started with a quick registration, followed by the opening keynote delivered by Murad Wagh, SE Director @VMware, talking about what's new with compute, network, software-defined storage, cloud-native applications and VMware Cloud on AWS.

Murad Wagh walked us through vSphere Platinum, the simplified vCenter topology with built-in PSC, the Trusted Platform Module and virtual Trusted Platform Module, and vSphere Health, another powerful feature of VMware vSphere 6.7 for identifying and resolving potential issues. Last but not least, vMotion for NVIDIA GRID vGPU virtual machines, one of the topics I love to hear about and that is close to my heart, being an NVIDIA vGPU Community Advisor and an active VDI community member.

Murad Wagh also talked about NSX-T, discussing the various enhancements that have been made available, from built-in wizards for VDS to N-VDS migration to L7-based App ID for DFW. In the storage space he talked about vSAN-relevant vROps dashboards and TRIM/UNMAP integration. He also provided a quick overview of what's new with cloud-native apps (metric-based monitoring/analytics with Wavefront) and VMware Cloud on AWS.

The next session was about network virtualization, discussing current data challenges and how NSX can help overcome them, with a deep dive on NSX-T led by Raminder Singh, Technical Account Manager at VMware, and Inder, Consultant. They walked us through the complete architecture of NSX-T and also talked about Geneve being used in NSX-T for data encapsulation.

The IT Security session was led by Pranay Jha, an Infrastructure Architect, who helped us understand how to improve IT security and the various security boundaries: confidentiality, integrity, availability and data privacy, covering data at rest, data in motion and data in use.

Power of PKS was the last session of the day, led by Himanshu Taneja, who gave an overview of containers and walked us through the architecture of Kubernetes. Then it was time for Q&A and an awesome giveaway (the VMware vSAN 6.7 U1 Deep Dive book by Cormac Hogan and Duncan Epping).

Well that's me receiving the book from Pankaj Shukla - Senior Manager SE @VMware

Final Thoughts - It was an amazing day with some fantastic sessions, from the keynote and IT security to deep dives on various verticals including NSX-T and PKS, followed by a meet-and-greet with like-minded people and discussions about how they use VMware product features and functionality in their environments. Looking forward to our next meet :-)

If you are in Delhi/NCR and associated with VMware or using VMware products, this is an event you don't want to miss: Delhi NCR VMUG. If you are not in Delhi/NCR, you can search for a local VMUG group in your region to attend VMUG community events: VMware User Group.

Wednesday 13 February 2019

Back to Basics Part 18 - Application Virtualization with VMware ThinApp

As promised, the new Back to Basics series is here, with posts from other verticals, not just server virtualization (VMware vSphere). For example, this post is dedicated to VMware's End-User Computing, focusing on VMware ThinApp. A few other blogs currently in draft for the new Back to Basics series will be available soon and will focus on other products, including VMware vSAN, VMware NSX and VMware vSphere.

VMware ThinApp

1) ThinApp is an agentless application virtualization solution that helps accelerate application delivery and eliminates the burden of provisioning, patching and updating applications and images. ThinApp virtualizes an application by encapsulating the application's files and registry into a single ThinApp package that can be deployed independently of the underlying operating system.

2) Each application is packaged and encapsulated into its own container, separate from any other application. This is done with the help of three important components of a ThinApp package: the ThinApp runtime, the application's file and registry modifications, and the sandbox.

3) ThinApp loads the ThinApp runtime as soon as the packaged application executable is started by the user. The runtime then encapsulates and virtualizes all the calls the application makes to the operating system for registry settings and other services, creating a virtual environment: a thin layer between the application and the operating system.


4) The sandbox contains shadow directories for all native file directories that are affected during application runtime. The sandbox directory holds an application's user-configurable settings; for example, two users of the same packaged application can each keep a different home page.

** The sandbox is not a cache; it stores user settings persistently.

5) The application's file and registry modifications are captured by the ThinApp packaging process, which scans the native file system and registry before and after the installation of the application we plan to virtualize. During the pre- and post-installation phases, the snapshot.exe utility scans and records the attributes of each file in the file system and each registry key; it then compares the two snapshots to find out which files and registry keys were changed, so they can be tracked in a separate ThinApp project folder.

6) Once snapshot.exe has determined the changed files, those files are added to the application project directory, which becomes the virtual file system during the ThinApp build process. When build.bat runs, it starts the vftool.exe tool (which compiles the virtual file system during the build of the captured application) and builds entries in the FS registry key under HKEY_LOCAL_MACHINE.
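Conceptually, the snapshot-compare step is a plain dictionary diff. The sketch below models it with "path → attributes" maps (snapshot.exe's real on-disk format is not documented here, so the paths and attribute fields are illustrative):

```python
def diff_snapshots(before, after):
    """Compare pre- and post-install snapshots (path -> attribute dicts)
    and return the entries that were added or changed -- the files that
    would end up tracked in the ThinApp project folder."""
    changed = {}
    for path, attrs in after.items():
        if before.get(path) != attrs:  # new file, or attributes modified
            changed[path] = attrs
    return changed

pre = {r"C:\Windows\system.ini": {"size": 219}}
post = {r"C:\Windows\system.ini": {"size": 219},
        r"C:\Program Files\App\app.exe": {"size": 4096}}
delta = diff_snapshots(pre, post)
```

Only the newly installed app.exe shows up in the diff; the untouched system.ini is excluded.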

7) The virtual registry stores the registry settings used by the application and the ThinApp runtime. After snapshot.exe has determined which registry keys changed during the application's installation and configuration, the keys are recorded in text files in the application project; there are three main files: HKEY_CURRENT_USER.txt, HKEY_LOCAL_MACHINE.txt and HKEY_USERS.txt.

8) We can control the level of interaction between the application and the native system using different isolation modes: 1) Merged isolation mode, the default for the file system, where updates to the file system and registry are merged with the native file system; 2) WriteCopy file and registry isolation mode, where the user can read from the native file system but write operations go to a copy of the native file system in the sandbox; 3) Full file and registry isolation mode, where the user can neither read from nor write to the native file system.
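The three isolation modes boil down to a small read/write policy table. The sketch below models that decision as described above; it is an illustration of the rules, not ThinApp's internals:

```python
# (can_read_native, writes_go_to) for each ThinApp isolation mode
ISOLATION_MODES = {
    "Merged":    (True,  "native"),   # reads and writes merge with the native FS
    "WriteCopy": (True,  "sandbox"),  # reads native, writes a sandbox copy
    "Full":      (False, "sandbox"),  # no native read or write access
}

def can_read_native(mode):
    """Can the packaged app read the native file system in this mode?"""
    return ISOLATION_MODES[mode][0]

def resolve_write(mode):
    """Where does a write land under the given isolation mode?"""
    return ISOLATION_MODES[mode][1]
```

For instance, under WriteCopy a document opened from the native file system is readable, but saving it lands in the sandbox copy rather than the native location.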

9) A ThinApp application package can be deployed in various modes: 1) Deployed execution mode, where the package (an executable file) is first deployed to the end user's system and then accessed from the local device; users execute their copy of the ThinApp package like any other natively installed Windows application. 2) Streaming execution mode, which, as the name suggests, enables the application to be centrally stored and accessed by multiple users; the package loads directly into memory and no local disk storage is required.

10) VMware ThinApp 5.2.4 is the latest version, released in September 2018, with additional support for Windows 10 1709 and Windows 10 1803 and a new parameter in package.ini, ReleaseShutdownLocksEarly; setting this parameter to 1 fixes an issue that could cause a process to hang during shutdown.

Wednesday 6 February 2019

Vembu BDR Suite v4.0 - 10 Features you need to know

Vembu recently announced the latest release of their flagship product, Vembu BDR Suite v4.0, with some cool new features, so I thought of dedicating an article to the various enhancements that have been made.

Hyper-V Failover Cluster Support - This release of Vembu BDR Suite now supports backup of VMs running on a Hyper-V cluster, which means scheduled backups will not be interrupted even if virtual machines fail over from one host to another.

Handling New Disk Addition - In older versions of Vembu BDR Suite, any disk newly added to a virtual machine on ESXi or Hyper-V was backed up only during the next full backup. From v4.0, new disk additions are detected and considered for backup on the next incremental schedule.

Checksum-Based Incremental for Hyper-V is leveraged to ensure seamless backups when the default CBT technology fails.

A Storage Utilization Report can now be generated with details like the size of the VM before the backup schedule, the actual size of the backup data in the storage repository, the compression rate and the storage reduction rate.
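A storage-reduction figure like the one in this report is just a ratio of stored size to source size. A hypothetical calculation (the function and field names are mine, not Vembu's):

```python
def storage_reduction_rate(source_size_gb, stored_size_gb):
    """Percentage saved by compression/deduplication in the repository."""
    if source_size_gb <= 0:
        raise ValueError("source size must be positive")
    return round((1 - stored_size_gb / source_size_gb) * 100, 1)

rate = storage_reduction_rate(500, 180)  # a 500 GB VM stored as 180 GB
```

A 500 GB VM stored as 180 GB of backup data corresponds to a 64% storage reduction.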

Live Recovery to VMware ESXi and Hyper-V has been enhanced: we can now configure the hardware specifications of the target hosts, including socket and core counts, memory and hard disk provisioning type, when performing a permanent recovery.

The all-new Credential Manager helps manage the credentials of VM hosts, guest virtual machines and physical machines, avoiding the need to enter credentials every time.

Backup of Hyper-V virtual machines was initially limited to SMB storage; with Vembu BDR v4.0, support has been extended to shared VHDX.

Active replication jobs can now be aborted from within the Vembu OffsiteDR server console to terminate an ongoing replication from the Vembu onsite server to the offsite server.

We can now activate or deactivate Vembu BDR servers from Vembu OffsiteDR if we wish to stop backup replication to the disaster recovery site.

The Quick VM Recovery Report provides insight into virtual machine recovery jobs with details like the RTO, start time and end time, and also offers filters such as the name of the VM triggered for recovery and the target server.


Wednesday 30 January 2019

NVIDIA Quadro Now Available on Microsoft Azure Cloud

NVIDIA recently announced that Quadro Virtual Workstation (vWS) is now available on the Microsoft Azure Marketplace, helping engineers achieve high-performance simulation, rendering and design. With the NVIDIA Virtual Machine Image (VMI) and Quadro vWS software pre-installed, enterprise customers can spin up a Microsoft Azure VM in minutes.
To access NVIDIA Quadro on Microsoft Azure, log in to the Microsoft Azure Marketplace, search for NVIDIA, and then select from NVIDIA Quadro Virtual Workstation - WinServer 2016 and NVIDIA Quadro Virtual Workstation - Ubuntu 18.04, starting at US$0.20/hour.

After selecting the NVIDIA Quadro Workstation running GRID software 7.1, click Create and specify details including the name of the VM and the region you want to provision it in. While selecting the image, we need to choose one of the supported Microsoft Azure VM sizes, where each VM is configured with a specific number of GPUs in passthrough mode.

** The NCv3-series focuses on high-performance computing workloads (reservoir modelling, DNA sequencing) powered by NVIDIA's Tesla V100 GPU, while the ND-series is focused on training and inference scenarios for deep learning and uses the NVIDIA Tesla P40 GPU.

  • NCv3 series – 1 GPU = 1 V100 card
  • NCv2 series – 1 GPU = 1 P100 card
  • ND series – 1 GPU = 1 P40 card

After the name, region, image and size are selected, specify the administrator account details (I used quadro as the username) along with a password, and then configure other options like disks, networking, management, guest config and tags, as for any other Azure VM.

Once we have configured all the required options for our NVIDIA Quadro Virtual Workstation, we can review and spin up the VM, then connect to it directly or after downloading the RDP file. Below is an example I picked from NVIDIA, where they are running ANSYS Discovery Live on their cloud-based virtual workstation.

Image Source - NVIDIA

Final Thoughts - With NVIDIA Quadro Virtual Workstation now available on Azure cloud, many enterprises can quickly deploy GPU-accelerated Quadro workstations, and design engineers can use the power of Quadro from any location without managing back-end infrastructure.