Tuesday, 25 April 2017

Back to Basics - Part 13: vCenter Server High Availability

In our previous posts in the Back to Basics series we discussed Virtual Machine Files (Part 1), Standard Switches (Part 2), vCenter Server (Part 3), Templates (Part 4), vApps (Part 5), Migration (Part 6), Cloning (Part 7), Host Profiles (Part 8), Virtual Volumes AKA VVOLs (Part 9), Fault Tolerance (Part 10), Distributed Switches (Part 11), and Distributed Resource Scheduler (Part 12). We also discussed the various tasks related to building a home lab in Part 1, Part 2, Part 3, Part 4, and Part 5.

This article is dedicated to an important feature added in VMware vSphere 6.5: the option to run vCenter Server in a highly available configuration.



Image Source - VMware

vCenter Server High Availability protects against host and hardware failures, and it also helps reduce the downtime required when patching the vCenter Server Appliance, which can be done using the software-packages utility available in the vCenter Server Appliance shell.
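For reference, here is a minimal sketch of that patching workflow from the appliance shell, assuming the patch ISO has been attached to the appliance's CD-ROM drive:

software-packages stage --iso      # stage the packages from the attached patch ISO
software-packages list --staged    # review what has been staged before installing
software-packages install --staged # install the staged packages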

Before proceeding with the deployment of vCenter Server in high availability mode, we need to ensure the network is configured well in advance: add a port group to each ESXi host, and add a virtual NIC to the vCenter Server Appliance.
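The port group can be created from the ESXi shell as well as the Web Client; a minimal sketch, assuming a standard vSwitch named vSwitch1 and a port group named vCHA-Network (both example names):

# Create the dedicated vCenter HA port group on each host (names are examples)
esxcli network vswitch standard portgroup add --portgroup-name=vCHA-Network --vswitch-name=vSwitch1

# Verify the port group was created
esxcli network vswitch standard portgroup list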

In a vCenter HA deployment each node plays an important part. The Active node runs the active vCenter Server Appliance instance and communicates with the other nodes over a dedicated HA network. The Passive node is a cloned copy of the Active node and becomes active if a failure of the Active node is detected. The last node on our list is the Witness node, which helps us avoid split-brain scenarios.

During the configuration of vCenter HA we get two options. With the Basic option, the vCenter HA wizard automatically creates and configures a second network adapter on the vCenter Server Appliance, clones the Active node, and configures the vCenter HA network. The Advanced option gives us more flexibility than the Basic option: we are responsible for adding a second NIC to the vCenter Server Appliance, cloning the Active node to the Passive and Witness nodes, and configuring the clones.


Sunday, 16 April 2017

Active Directory Integration - Nakivo Backup and Replication v7

We have already dedicated a couple of articles to Nakivo Backup and Replication v6.1, covering its architectural components and the new features available in that release; here is a link for your quick reference: Demystifying Nakivo Backup and Replication v6.1.

We also discussed backup and recovery of Active Directory objects with Nakivo Backup and Replication v6.1; in case you missed it, here is the link for your quick reference: Backup/Recover Active Directory Objects with Nakivo.

Apart from testing the backup and recovery functionality in Nakivo Backup and Replication v6.1, we also had a detailed discussion on replicating virtual machines; here is the link for your reference: Replicate VM's with Nakivo Backup & Replication.

We also talked about working with the Nakivo Backup and Replication appliance and what's new with Nakivo Backup and Replication 6.2, which was announced by NAKIVO on October 13th, 2016 and provides backup, replication, and recovery of paid EC2 instances sold through AWS Marketplace. We also discussed What's New - Nakivo Backup and Replication v7 Part 1, wherein we saw some highlights about Hyper-V support, which has been added for Hyper-V 2016 and 2012 with Nakivo v7.

It's time to test out one of the features that we discussed in those posts: Active Directory integration. This is one of the important features we look forward to when it comes to production environments. With NAKIVO Backup & Replication we can easily integrate Microsoft Active Directory and map Active Directory groups to NAKIVO Backup & Replication user roles, allowing domain users to log in to NAKIVO Backup & Replication using their domain credentials.
When it comes to the integration, the steps are pretty straightforward. I created a test user, Backup Admin, in Active Directory and added this user to a group named Backupusers.
Log in to the Nakivo Backup and Replication console as an administrator, look for the User Accounts option, and edit its properties.
The AD integration option is disabled by default; we can enable it by specifying the domain name and the name of the Active Directory group, and then click Test Integration.
The Test Integration option asks us to specify the details, including the domain user login and credentials. Once the test is successful, we can log out as the administrator and log in as the AD user.
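Before enabling the integration, it can also help to confirm that the user and group actually resolve in Active Directory. Here is a quick sketch using ldapsearch from any Linux box; the domain controller, domain, and account names below are hypothetical stand-ins for the ones used in this walkthrough:

# Bind as the test user and list the groups it belongs to
# (dc.lab.local, lab.local, and backupadmin are example values)
ldapsearch -x -H ldap://dc.lab.local -D "backupadmin@lab.local" -W -b "dc=lab,dc=local" "(sAMAccountName=backupadmin)" memberOf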

Try Nakivo Backup and Replication v7 

Tuesday, 4 April 2017

Cannot run upgrade scripts on ESXi Host

Recently I was given the task of upgrading a few ESXi boxes (UCS B200 M3) running ESXi 5.0 Update 3 to ESXi 5.5 Update 2. As part of the process I migrated some monster virtual machines to neighboring ESXi boxes and put the host into maintenance mode, which made DRS kick in and migrate the remaining virtual machines to other hosts in the cluster.

The second thing on the list was to create a baseline, attach it to the ESXi host, and proceed with remediation. Initially it looked as if the host was being remediated properly, until it reached 92 percent and I saw that the status of the host was still showing ESXi 5.0 Update 2. So I checked the tasks and events and found an error: "Cannot run upgrade scripts on the Host".
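When a remediation appears to complete but the reported version looks wrong, it is worth confirming what the host is actually running from the host shell; both commands below are standard on ESXi:

# Report the installed ESXi version and build number
vmware -vl

# The same details via esxcli
esxcli system version get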

As part of the initial troubleshooting I found KB article 2007163, which talks about the same issue, so I started following the steps in the KB article to locate the log files and check them for the errors it describes.

After connecting to my ESXi box using PuTTY, I was not able to find any entries similar to the one below in the /var/log/vua.log file:

OSError: [Errno 39] Directory not empty: /bootbank/state.XXXXXXX (where XXXXXXX is a number)
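With shell access to the host, the log can be scanned quickly; a simple sketch:

# Search the vSphere Update Agent log for error entries
grep -i error /var/log/vua.log

# Or follow the log live while re-running the remediation
tail -f /var/log/vua.log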

However, I was able to see some strange error messages: "This Application is Not using Quick Exit(). The Exit code will be set to 0" @ bora/vim/lib/vmacore/main/service.cpp.

So I searched for more references to troubleshoot this issue and found a few that talk about uninstalling the FDM agent and then remediating the ESXi host.

I followed the steps below to troubleshoot further: uninstall the FDM agent as suggested in KB article 2012323, reboot the ESXi host, remediate, and exit maintenance mode.

# Copy the FDM uninstall script to /tmp
cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp

# Make the script executable
chmod +x /tmp/VMware-fdm-uninstall.sh

# Run the uninstaller
/tmp/VMware-fdm-uninstall.sh

** The steps shown above worked in my environment; kindly raise a support request if you are facing a similar issue and are unable to resolve it with the help of the KB articles.

Saturday, 1 April 2017

VMware Horizon 7.1 + NVIDIA GRID vGPU = Awesomeness

I always feel honoured and privileged to be part of communities like VMware vExperts and the NVIDIA GRID Community Advisors, where I get the chance to hear from the industry's great speakers and technologists, who bring in their experience and knowledge-sharing capabilities and help the community grow in terms of technical knowledge.

Thanks to all the community members, NVIDIA, and VMware for giving me the chance to be part of these awesome communities, which helps me write content for my blog and spread knowledge to the global virtualization community.

I have dedicated a couple of articles to features available with VMware Horizon, like Virtual Printing, and also discussed some of the protocols used, like PCoIP, which is Teradici's proprietary UDP-based protocol.

Apart from the above articles, we have also seen a lot of articles on NVIDIA GRID vGPU, like 10 Things We Need to Know About NVIDIA GRID vGPU, where we discussed the various software editions: NVIDIA GRID Virtual Applications (for organizations deploying XenApp and other RDSH solutions), NVIDIA GRID Virtual PC (for users who want virtual desktops with a great user experience), and NVIDIA GRID Virtual Workstation (for users who want to use remote professional graphics applications with full performance on any device, anywhere).

We also talked about how GRID Virtual PC, Virtual Workstation, and Virtual Apps are available on a per concurrent user (CCU) model, where a CCU license is required for each user irrespective of whether an active session to the virtual desktop exists or not.

In our last post in the NVIDIA GRID vGPU series, Climb the GPU Ladder with NVIDIA GRID vGPU, we talked about the architecture of NVIDIA GRID vGPU and saw how GRID vGPUs are assigned to virtual machines much like physical GPUs, because each vGPU has a fixed amount of frame buffer (the portion of RAM containing a bitmap that is used to refresh a video display from a memory buffer) which is allocated at the time the vGPUs are created.
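On an ESXi host with the NVIDIA GRID host driver (VIB) installed, nvidia-smi gives a quick view of the physical GPUs and, in GRID driver builds, the vGPU instances carved out of them; a short sketch (availability of the vgpu subcommand depends on the driver version):

# Show the physical GPUs, driver version, and utilization
nvidia-smi

# List active vGPU instances on this host (GRID driver builds)
nvidia-smi vgpu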

This article is dedicated to a few new features made available with VMware Horizon 7.1, released by VMware on 16th March 2017, in the context of NVIDIA GRID vGPU. But before we begin to talk about the features that have been introduced, I would like to focus on three important cloning concepts that are available to us and how each differs from the others.

A full clone is a complete, independent copy of a virtual machine and shares nothing with the Master VM; every time we create a full clone, it operates separately from the Master VM from which it was created.

Linked clones, also known as Composer linked clones, help us create VMs with less storage space consumption because software access is handled via shared virtual disks. To create a Composer linked clone we create a Master VM and take a snapshot of it; the cloning process then creates a Replica VM (a full clone) which shares its disks with the linked clones.


Image Source - VMware


While creating automated desktop pools, View Composer uses the parent/Master virtual machine, often called the base image, to create new linked clones. Each linked clone has its own OS disk, which is a snapshot delta disk for the operating system, and can also have optional disks: a persistent disk (helps users preserve data in case of shutdown/logout) and a disposable disk (holds the Temp folder and paging file and is deleted when the linked clone is powered off).

Instant clones also share virtual disks with a Replica virtual machine; however, the creation process differs from linked clones. At cloning time a parent VM is created from the Replica virtual machine, and instant clones share the memory of the running virtual machine from which they are created.



When an instant clone is created from a running parent Virtual Machine, any reads of unchanged information come from the already existing running parent VM. However, any changes made to the instant clone are written to a delta disk, not to the running parent VM. 
Image Source - VMware

Now that we are aware of the various cloning options available to us with VMware Horizon, we can proceed further and discuss what's new with VMware Horizon 7.1.

Instant Clone support for vGPU allows us to provision vGPU desktops using Instant Clones. Before creating the pool we need to ensure we have created a Master VM, added an NVIDIA GRID vGPU device, and selected the GPU configuration for the Master VM. We can then select the required profile, which is as of now limited to one profile per cluster and is compatible with the NVIDIA GRID M series.
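As a hedged sketch, the profile selected for the Master VM ends up in its .vmx file as a shared PCI device entry; on vSphere this typically appears as a pciPassthru vgpu parameter (the datastore path, VM name, and profile below are example values, not taken from this environment):

# Inspect the Master VM's .vmx for the assigned vGPU profile (path is an example)
grep -i vgpu /vmfs/volumes/datastore1/MasterVM/MasterVM.vmx
# Expected output would resemble: pciPassthru0.vgpu = "grid_m10-0b"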

With the VMware Horizon 7.1 release we no longer need to choose between having high-end, hardware-accelerated graphics support and Instant Clones. We can now enjoy the best of both worlds: the ability to create a pool of Instant Clones backed by NVIDIA GRID vGPU.

As administrators we now have the power to deliver hundreds of desktops to end users using instant clones with NVIDIA GRID vGPU support; this is definitely going to make a difference from a management and administration perspective.

Based on recent scale testing of Instant Clones with vGPU conducted by VMware and NVIDIA, with 500 VMs running the configuration below, it took only 44 minutes to create the pool of fully customized and powered-on Instant Clones. That is 4.8 seconds per clone on average:
  • Horizon 7.1, Horizon Client 4.4, vSphere / ESXi 6.5 EP1
  • 5 HP ProLiant DL380 servers with Xeon CPUs, 96 cores total across the servers, 2 M10 GRID vGPU cards per server
  • Master VM: 2 vCPU, 2 GB RAM, M10-0B profile, Windows 7 SP1 x64
We can also perform maintenance on instant-clone virtual machines by using the vSphere Web Client and putting the ESXi hosts into maintenance mode, which automatically deletes the parent VMs from that ESXi host.
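For reference, maintenance mode can also be entered from the host shell; a minimal sketch (in a DRS cluster the operation completes only once the running VMs have been evacuated):

# Put the host into maintenance mode
esxcli system maintenanceMode set --enable true

# Check the current state
esxcli system maintenanceMode get

# Exit maintenance mode once the work is done
esxcli system maintenanceMode set --enable false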

Another important feature that I would like to highlight is VMware Blast Extreme, which is included in VMware Horizon 7 and uses the Transmission Control Protocol (TCP) by default, but can also use the User Datagram Protocol (UDP).


VMware Blast Extreme in VMware Horizon 7 works best with NVIDIA GRID, offloading the encode/decode process from the CPU to dedicated H.264 engines on NVIDIA GPUs to provide a great user experience by reducing overall latency and bandwidth consumption.