Tuesday 30 August 2016

VMworld 2016 Day 2 Press Release


Today at VMworld 2016, VMware, Inc. (NYSE: VMW) introduced VMware Integrated OpenStack 3, the latest release of VMware's OpenStack distribution, now based on the OpenStack Mitaka release. VMware Integrated OpenStack gives you control of cloud resources through the OpenStack API. (OpenStack Mitaka, the 13th release of the most widely deployed open source software for building clouds, offers greater manageability and scalability as well as an enhanced end-user experience.)

VMware is introducing new features to make deploying OpenStack clouds simpler and more cost-effective, and to allow customers to use existing VMware vSphere workloads in an API-driven OpenStack cloud.



VMware Extends Capabilities of vSphere Integrated Containers to Help Enterprises Embrace Digital Business

Today at VMworld 2016, VMware, Inc. (NYSE: VMW) unveiled two new capabilities of VMware vSphere Integrated Containers, which enables IT operations teams to provide a Docker-compatible interface to their app teams, running on their existing vSphere infrastructure.

New container registry and management console features round out VMware vSphere Integrated Containers to further help IT teams operate containers in production with confidence. 

VMware Delivers Innovations to the Digital Workspace With Unified Endpoint Management, Windows 10 Support and Enhanced Identity Management

Today at VMworld 2016, VMware, Inc. (NYSE: VMW) announced a new unified endpoint management approach for managing Windows 10, along with advancements to VMware Horizon and VMware Workspace ONE.

New Unified Endpoint Management Technology Combines Enterprise Mobility Management With Traditional PC Lifecycle Management for Comprehensive Windows 10 Support.

Enhancements to VMware Horizon Include New Plug-And-Play Solutions and Accelerated Performance From Blast to Help Drive Down Cost and Complexity; New Automation Capability in VMware Workspace ONE With VMware Identity Manager Further Streamlines Management of Office 365

These innovations help advance the digital workspace embraced by the industry and solve the challenge of supporting an increasingly mobile workforce that demands anytime, anywhere access to all applications from any device.

VMworld 2016 General Session - Monday, 29 August 2016


Overview: VMware NSX Distributed Firewall

We have already seen a couple of articles related to VMware NSX, in which we discussed the VMware NSX Overview and An Insight to VMware NSX Layers.

This article is dedicated to a quick overview of the VMware NSX Distributed Firewall, one of the features available with VMware NSX.


The NSX Distributed Firewall is a kernel-embedded firewall that provides control over our virtualized networks and workloads.

With the Distributed Firewall we can enforce rules at the vNIC level; yes, we are talking about the virtual NIC of a virtual machine.

The Distributed Firewall runs inside the ESXi host as a kernel-space module, which means the more ESXi hosts we have, the greater the overall capacity of the NSX Distributed Firewall.

The Distributed Firewall enforces security policies based on VMware vCenter objects like datacenters and clusters, virtual machine names and tags, and network constructs such as IP/VLAN/VXLAN addresses, irrespective of where the virtual machine resides and how it is connected.

When it comes to applying firewall rules, the NSX Manager is used, which pushes all the firewall rules down to the underlying ESXi hosts.

What kinds of firewall rules can be applied? We can create network-based rules (IPv4, IPv6 and MAC addresses) as well as virtualization- and application-aware rules.
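To make the idea of object-based rules concrete, here is a small sketch of how a firewall might match rules against VM attributes (names, tags, IPs, MACs) rather than only network addresses. The class names and fields are purely illustrative; this is not the NSX API or rule format.

```python
# Hypothetical sketch: match firewall rules against VM attributes
# (name, tag, IP, MAC). First matching rule wins; default is deny.

class Rule:
    def __init__(self, attribute, value, action):
        self.attribute = attribute  # e.g. "tag", "ip", "mac", "name"
        self.value = value
        self.action = action        # "allow" or "deny"

def evaluate(vm, rules, default="deny"):
    """Return the action of the first rule matching the VM's attributes."""
    for rule in rules:
        if rule.value in vm.get(rule.attribute, []):
            return rule.action
    return default

rules = [
    Rule("tag", "web-tier", "allow"),   # virtualization-aware rule
    Rule("ip", "10.0.0.99", "deny"),    # network-based rule
]

web_vm = {"name": ["web01"], "tag": ["web-tier"], "ip": ["10.0.0.5"]}
print(evaluate(web_vm, rules))  # allow
```

Because the match is on VM metadata, the rule follows the VM regardless of which host or network segment it lands on, which is the point the paragraph above makes.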

Monday 29 August 2016

VMware ESXi RollBack Home Lab Test Results

While working on my home lab, I tested one of the important features available in ESXi: Host Networking Rollback. As the name states, an invalid change made to the network configuration of an ESXi host will automatically trigger a rollback to the previous valid configuration.

So what configuration changes can trigger this automatic rollback for our ESXi host?

- Updating DNS and Routing Settings

- Updating the Speed or Duplex of a Physical NIC

- Changing the IP Settings of a Management VMkernel Network Adapter.

- Updating the teaming and failover policies of a port group that contains the Management VMkernel Network Adapter.

I tried the third one, changing the IP settings of the Management VMkernel Network Adapter.

An automatic rollback occurred, restoring the ESXi host to its previous valid configuration.
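The rollback behaviour I observed can be sketched as "apply, validate, restore on failure". The code below is an illustrative simulation of that idea, not ESXi code; the config fields and the toy reachability check are my own assumptions.

```python
# Illustrative sketch of network rollback: apply a management-network
# change, test reachability, and restore the last valid configuration
# if the host would become unreachable.

def apply_with_rollback(current, new, is_reachable):
    """Apply `new`; roll back to `current` if the check fails."""
    if is_reachable(new):
        return new, "applied"
    return current, "rolled back"

valid = {"vmk0_ip": "172.20.0.10", "gateway": "172.20.0.1"}
bad   = {"vmk0_ip": "10.99.0.10", "gateway": "172.20.0.1"}  # wrong subnet

# Toy reachability check: the management IP must sit in the lab subnet.
reachable = lambda cfg: cfg["vmk0_ip"].startswith("172.20.")

cfg, status = apply_with_rollback(valid, bad, reachable)
print(status)  # rolled back
```

The real mechanism validates actual management connectivity, of course; the sketch only captures the decision logic.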


*Note: The installation/configuration/specification methods used here have been tested in my nested home lab environment.

Thursday 25 August 2016

Kick Start Your Journey Towards VCAP6-DCV Design

Special thanks to my travel team for making travel arrangements that allowed me to borrow some time out of my schedule.

Most of us who are already VCP6-DCV certified share a common goal of achieving the next level of certification with VMware, the advanced certification, and it really takes a lot of time, hard work and dedication to achieve these credentials.


I thought of starting this VMware Certified Advanced Professional 6 - Data Center Virtualization Design blog series because in most of the trainings I deliver for VMware Education Services, I get the same question from my audience.


How to achieve the "VMware Design Certification"? Well, here is the answer.



Where to Start?

The first and foremost thing is to achieve your VMware Certified Professional 6 - Data Center Virtualization credentials.

I know it's again a question for some of us: how do we achieve these VCP6-DCV credentials?


So this is what you need to do! Attend one of these mandatory training courses from VMware Education Services or from any VMware Authorized Training Center (VATC) before you appear for your VCP6-DCV exam.


vSphere: Install, Configure, Manage [V6]
vSphere: Install, Configure, Manage [V6] On Demand
vSphere: Optimize & Scale [V6]
vSphere: Optimize and Scale [V6] On Demand 
VMware vSphere: Bootcamp [V6]
VMware vSphere: Fast Track [V6]
VMware vSphere: Design and Deploy Fast Track [V6]
VMware vSphere: Troubleshooting [V6]
VMware vSphere: Install, Configure and Manage plus Optimize and Scale Fast Track [V6.0]
VMware vSphere: Optimize and Scale plus Troubleshooting Fast Track [V6.0] 


All the courses mentioned above are from the same track, VMware vSphere, but that doesn't mean you cannot achieve your VCP6 and VCAP6 credentials in any other track.

I mentioned only the VMware vSphere courses here because in this blog post series we will be focusing on the VCAP6-DCV Design track; for all other tracks I would recommend following the links below.

Once you have completed the above training, you can schedule your VCP6-DCV exam. My heartiest congratulations to those who are already VCPs, and good luck to all the aspiring VCPs.

What's Next ?


Now we are all set to proceed further with the VCAP6-DCV Design exam! What about the training prerequisites? Do we need to complete any mandatory training before appearing for the VCAP6-DCV Design exam?


Well, there is no mandatory training as such. However, to align yourself more towards the goal of achieving your VCAP6-DCV Design credentials, VMware recommends attending VMware vSphere: Design and Deploy Fast Track [V6], which covers the exam objectives for VCAP6-DCV Design and gives the audience exposure to design scenarios in which they can create their own designs and have a healthy discussion with their instructors about them.


To get an overview of a few important aspects of VMware vSphere design, here is a link that talks about the various stages of a design process and also discusses the design qualifiers we need to consider for our design:
VMware vSphere Design Qualifiers

Exam Blueprint 

1) Create a vSphere Conceptual Design

Objective 1.1 – Gather and Analyze Business Requirements

Objective 1.2 – Gather and Analyze Application Requirements

Objective 1.3 – Determine Risks, Requirements, Constraints and Assumptions

2) Create a vSphere 6.x Logical Design from an Existing Conceptual Design

Objective 2.1 – Map Business Requirements to a vSphere 6.x Logical Design

Objective 2.2 – Map Service Dependencies

Objective 2.3 – Build Availability Requirements into a vSphere 6.x Logical Design

Objective 2.4 – Build Manageability Requirements into a vSphere 6.x Logical Design

Objective 2.5 – Build Performance Requirements into a vSphere 6.x Logical Design

Objective 2.6 – Build Recoverability Requirements into a vSphere 6.x Logical Design

Objective 2.7 – Build Security Requirements into a vSphere 6.x Logical Design


3) Create a vSphere 6.x Physical Design from an Existing Logical Design

Objective 3.1 – Transition from a Logical Design to a vSphere 6.x Physical Design

Objective 3.2 – Create a vSphere 6.x Physical Network Design from an Existing Logical Design

Objective 3.3 – Create a vSphere 6.x Physical Storage Design from an Existing Logical Design

Objective 3.4 – Determine Appropriate Compute Resources for a vSphere 6.x Physical Design

Objective 3.5 – Determine Virtual Machine Configuration for a vSphere 6.x Physical Design

Objective 3.6 – Determine Data Center Management Options for a vSphere 6.x Physical Design

Key Note - It's a journey dedicated to achieving these objectives and the VCAP6-DCV Design credentials. I will be writing a series of blog posts on how to work towards each objective, discussing key aspects of VMware vSphere design and sharing a few useful links that will act as a helping hand on the way to our goal.

Wednesday 24 August 2016

My New VMware Home Lab is Spinning

With my new home lab in place, it was time for me to proceed with the initial setup and configuration by spinning up some virtual machines.

It took me six days to build my initial home lab setup, not because it was particularly big but because my very busy schedule was packed with another VMware Education training delivery this week. Still, I managed some time out of my schedule and ended up with this initial setup.

So what have we got?

Before we proceed, it's important to understand what we have in terms of physical hardware. In case you missed the initial post I shared about my all-new home lab, here is a quick reference: In Love With My New Home Lab.

Now that we have seen the hardware configuration, let me tell you that it is a high-end physical desktop, not a physical server. Yes, it's definitely not a physical server, and why is that?

Well, my pocket doesn't allow me to spend on a physical server, and that too with only a basic configuration, when you can get a high-end desktop within the same budget.

If you are one of those sponsors waiting to loan out server hardware, you are more than welcome! :-)

To start with, I created a domain controller virtual machine running Windows Server 2008 R2. I have dedicated one of the articles in my home lab post series to the installation and configuration of this domain controller virtual machine; you can refer to it here: Back to Basics - Home Lab Part 2.


After running the DCPromo command and successfully installing DNS, my new domain name was something quite obvious: HomeLab.Local.

For the first machine I used a static 172.20.x.x IP address, and the same 172.20.x.x range for the default gateway.

*Note: The installation/configuration/specification methods used here have been tested in my nested home lab environment.

After this, I joined all the Windows virtual machines I created to the same domain, HomeLab.Local, to ensure there were no errors related to FQDNs.

However, there is an extra step required for all non-Windows machines: on your DNS machine, you need to add a proper entry in the forward lookup zone and also create a new record in DNS.
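The DNS bookkeeping described above boils down to every lab machine, Windows or not, having records that resolve in both directions. Here is a toy sketch of that consistency check; the hostnames and IPs are made-up lab examples, not my actual addresses.

```python
# Toy sketch: every lab host should have an A record (forward zone)
# and a matching PTR record (reverse zone) so FQDN lookups work both ways.

forward = {   # Forward lookup zone: name -> IP
    "esxi01.homelab.local": "172.20.0.11",
    "vcenter.homelab.local": "172.20.0.20",
}
reverse = {   # Reverse lookup zone: IP -> name
    "172.20.0.11": "esxi01.homelab.local",
    "172.20.0.20": "vcenter.homelab.local",
}

def dns_consistent(fqdn):
    """True if the A record and PTR record agree for this host."""
    ip = forward.get(fqdn)
    return ip is not None and reverse.get(ip) == fqdn

print(dns_consistent("esxi01.homelab.local"))  # True
```

A quick check like this before deploying vCenter saves a lot of FQDN-related installation errors later.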

The second, third, fourth and fifth machines I deployed were ESXi 6 servers, each running with 2 vCPUs and 6 GB of memory. Believe me, the installation of ESXi is the simplest installation you will ever see; have a look here: Back To Basics - Home Lab Part 3.

The only difference between that link and the current setup is that I used ESXi 5.5 in my last lab, whereas now I have used ESXi 6.0.

As I mentioned, ESXi installation is easy; however, the configuration can sometimes be tricky. Thankfully, the DCUI makes the configuration a lot easier.

For all four virtual machines running ESXi 6, I used static IPv4 addresses (172.20.x.x, 172.20.x.x, 172.20.x.x and 172.20.x.x) and also configured the DNS settings by specifying the DNS server's IP address.

It was time to provision vCenter Server, for which I created another Windows Server 2008 R2 virtual machine and used the Windows-based installation of vCenter Server 6, running with an embedded PSC and embedded database, with 2 vCPUs and 8 GB of RAM.


I used another static IP address from the 172.20.x.x range for vCenter Server and made this Windows virtual machine part of the same domain, HomeLab.Local.

But this time, with beautiful hardware that made me fall in love with it, I couldn't stop myself from creating another Windows-based vCenter Server 6 with the same configuration, joined to the same domain and running in Linked Mode.

Well, it's time to tell the truth: I installed an extra vCenter Server because of the future integration and testing I have planned. Yes, that's right, a lot more is coming up with my new home lab.

When it comes to creating a virtual machine for shared storage, I prefer using Openfiler, again because I used the same the last time I deployed my home lab. Here is the link for the configuration of Openfiler: Back to Basics - Home Lab Part 4.

Don't forget to configure the DNS settings inside Openfiler, and also make sure a proper record is created in DNS to avoid any further configuration-related errors.

All my shared storage requirements can be satisfied by Openfiler; however, I have also created an NFS share on one of my Windows Server 2008 R2 virtual machines and mounted it to my ESXi hosts, which I will be using as a shared repository for ISO images.



Datastores 1 and 2 are the local datastores for my ESXi hosts, and the shared datastore is created using LUNs from Openfiler.

Hope this article helps you. Do let me know if you are looking for some other functionality or integration to be tested; you can reach me @kanishksethi on Twitter.

I am really excited to test and integrate more features and products in my home lab environment and will keep this space updated with all the latest findings.

Monday 22 August 2016

Demystifying Nakivo Backup and Replication v6.1

This article is dedicated to understanding the architecture and components of Nakivo Backup and Replication v6.1.

Nakivo Backup and Replication is a one-stop shop for all our backup and replication requirements, irrespective of whether we want to recover VMs onsite, offsite or to the cloud.

Architecture & Components

1) Director - The centralized management interface responsible for managing our virtual infrastructure inventory. This is the place where we create backup jobs, manage them, and manage our backup repositories.



We can think of the Director as a vCenter Server accessed through the Web Client: the Web Client gives us the graphical user interface, while vCenter Server gives us the centralized management capabilities. Again, this analogy is only meant to explain the architectural behaviour of the Director component of Nakivo Backup and Replication v6.1.

So how many Directors should be deployed? One instance of the Director is enough to manage our geographically dispersed vCenter Servers, standalone ESXi hosts, and backup repositories.

2) Transporter - As the name suggests, the Transporter, "the one who transports", is the component responsible for performing replication, backup, data recovery, compression and encryption.

By default, an instance of the Transporter is installed along with the Director during installation; this is called the Onboard Transporter.

How many Transporters should be deployed?

A single Transporter can simultaneously process a number of VM disks during backup, replication and recovery; this is set to 6 by default, but it is a configurable parameter and can be changed.

What if a job contains more virtual machine disks than the Transporter is set to process simultaneously? In that scenario, the extra disks are put in a queue and processed by the Transporter once it is free to handle them.
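The queueing behaviour just described can be simulated in a few lines. This is purely illustrative (not Nakivo code): a transporter takes at most `max_concurrent` disks per pass (6 by default, per the text) and the remainder waits in a queue.

```python
# Simulate a transporter that processes at most `max_concurrent` VM
# disks at a time; extra disks queue up for the next pass.

from collections import deque

def process_disks(disks, max_concurrent=6):
    """Return the batches in which the disks would be processed."""
    queue, batches = deque(disks), []
    while queue:
        batch = [queue.popleft() for _ in range(min(max_concurrent, len(queue)))]
        batches.append(batch)
    return batches

disks = [f"disk{i}" for i in range(1, 9)]      # a job with 8 disks
print([len(b) for b in process_disks(disks)])  # [6, 2]
```

With 8 disks and the default limit of 6, the first 6 are processed together and the remaining 2 wait, which is exactly the scenario described above.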

Our initial question about how many Transporters should be deployed still remains unanswered. Well, the answer depends on how large the environment is. Do we need multiple VMs to be processed simultaneously? If yes, then we can go with more than one Transporter to distribute the workload among them.

Do we really find a business use case in our environment where we need more than one Transporter? Yes, maybe: if we have two sites over a WAN and VMs are being replicated between those sites.

In this example we can deploy more than one Transporter, so that the Transporters at the source and destination sites can encrypt and decrypt the data.

Otherwise, one Transporter deployed at a single site would be more than enough to take care of all our backup and replication requirements.

3) Backup Repository

Another important component when working with Nakivo Backup and Replication v6.1 is the backup repository (the folder used for storing virtual machine backups).

During the initial setup, when adding the backup repository, a folder gets created by the name of Nakivo Backup in the location you have specified.

When specifying the folder for our backup repository, we can choose a CIFS share or a local folder on the machine where the Transporter is installed; choosing the local-folder option lets us use any storage type (SAN, NAS).


When it comes to the management of these repositories, each repository is managed by an assigned Transporter that is responsible for performing reads and writes on that repository.

Virtual machine backups are compressed and deduplicated at the block level, giving us an overall storage space of up to 128 TB.
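To illustrate what block-level deduplication with compression means, here is a minimal sketch: identical blocks are stored (compressed) only once and referenced by hash. The block size and hash choice here are my own assumptions, not Nakivo internals.

```python
# Minimal sketch of block-level deduplication: split data into blocks,
# store each unique block compressed exactly once, keep references.

import hashlib
import zlib

def dedup_store(data, block_size=4096):
    """Return (unique-block store, per-block reference list)."""
    store, refs = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)
        refs.append(digest)
    return store, refs

data = b"A" * 8192 + b"B" * 4096   # two identical 4 KB blocks + one unique
store, refs = dedup_store(data)
print(len(refs), len(store))       # 3 blocks referenced, only 2 stored
```

Repeated blocks across VM backups (OS files, for instance) are where this saving really adds up.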

Now that we are aware of the architectural components, it's time to proceed further and talk about some new features in Nakivo Backup and Replication v6.1.

  • Object recovery for Microsoft Exchange 2016 and 2013: Nakivo Backup and Replication v6.1 enables us to browse, search, and recover Microsoft Exchange 2016 and 2013 objects, such as emails, directly from compressed and deduplicated backups without restoring the entire VM first. This feature is a personal favourite of mine, because when it comes to Exchange Server I know how critical it can be to find the right email when you really don't want to spend time recovering the virtual machine itself.
  • Job chaining: Another new feature in Nakivo Backup and Replication v6.1 extends its job scheduling capabilities, ensuring that one backup job can trigger another job upon its completion.


I will be dedicating another article to the installation and configuration, so consider this a heads-up well in advance to download the free trial of Nakivo Backup and Replication v6.1, which is available as a Windows installer, a Linux installer, a virtual appliance, an installer for NAS, and an Amazon Machine Image for AWS EC2.

Friday 19 August 2016

Nakivo - My First Blog Sponsor

I would like to take this opportunity to thank and welcome Nakivo as my first blog sponsor. It's both a proud and happy moment for me, so I am dedicating this article to saying thanks, understanding a little of Nakivo's history as a company, and taking a quick glance at Nakivo Backup and Replication.

About Nakivo
Headquartered in Silicon Valley, NAKIVO Inc. is a privately-held software company that develops and markets a line of next generation data protection products for VMware VM backup, replication, and recovery.
NAKIVO has been profitable since its founding in 2012, has been named an "Emerging Vendor 2013" by CRN, and reported revenue growth of 800% in 2013. This put NAKIVO at the top of the list of the fastest-growing data protection companies in 2013, outpacing all of its competitors.
As an Elite member of the VMware Technology Alliance Partner program, NAKIVO has close working relationships with VMware and is further aligned with VMware to promote the use of VM backup for virtualized server environments.
  • When it comes to deployment, Nakivo Backup & Replication is available for both VMware and AWS EC2 environments.
  • Installation is just a matter of a few clicks, as it is available as a preconfigured virtual appliance, a Windows-based installer, a Linux-based installer, and a preconfigured image for our AWS EC2 environment.
  • Backing up virtual machines was never so easy: with a simple-to-use graphical user interface, we can back up our virtual machines via scheduled jobs or on demand.
  • Replication can be performed on a live, running virtual machine, maintaining 30 recovery points for our virtual machines.
  • With Nakivo Backup and Replication v6.1 we can make use of the Direct SAN Access transport mode for backup and replication if the virtual machines reside on SAN storage.
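The "30 recovery points" mentioned above is essentially a sliding retention window: keep the newest N points, drop the oldest. Here is an illustrative sketch of that idea; the function and its behaviour are my own simplification, not Nakivo code.

```python
# Illustrative retention window: append a new recovery point and
# prune the oldest points beyond the configured limit.

def add_recovery_point(points, new_point, limit=30):
    """Append a recovery point, keeping only the newest `limit` points."""
    points.append(new_point)
    return points[-limit:]

points = []
for i in range(35):                       # simulate 35 replication runs
    points = add_recovery_point(points, f"rp-{i}")

print(len(points), points[0])             # 30 rp-5
```

After 35 runs, only the latest 30 recovery points remain; the five oldest have been pruned.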
I am currently busy with my initial new home lab setup; after that I will be integrating Nakivo Backup and Replication into my environment, testing its functionality, and keeping the blog updated with more posts.

Thursday 18 August 2016

In Love With My New Home Lab

It's been so long since I have written an article on my home lab, the reason being that I lost access to the physical server on which I created my entire home lab.

Since that day I have been struggling to test lots of new functionality and was not able to proceed with the integration work. In short, I was missing my old setup a lot, but there is an old saying: "Everything happens for a reason."

And I think I understood that reason today as I was unpacking all the new components I bought for my new home lab. Oh yes! Super excited.

Yes, we are back again with the home lab series, in which I will be sharing my experience of everything I test in my new home lab. So let's start with unwrapping the components.

Unwrapping the Components

1) Intel Core i7-5820K Processor - 




Cache: 15 MB SmartCache
Bus Speed: 5 GT/s DMI2
Instruction Set: 64-bit
Instruction Set Extensions: SSE4.2, AVX 2.0, AES

No. of Cores: 6
No. of Threads: 12

Intel Turbo Boost Technology: 2.0
Intel vPro Technology: No
Intel Hyper-Threading Technology: Yes
Intel Virtualization Technology (VT-x): Yes
Intel Virtualization Technology for Directed I/O (VT-d): Yes
Intel VT-x with Extended Page Tables (EPT): Yes
Intel 64: Yes
Idle States: Yes
Enhanced Intel SpeedStep Technology: Yes
Intel Demand Based Switching: No
Thermal Monitoring Technologies: Yes
Intel Identity Protection Technology: Yes
Intel Smart Response Technology: Yes
Execute Disable Bit: Yes
2) Intel Thermal Solution TS13A - Processor Cooler.




3) Corsair Vengeance LPX DDR4 8 GB 2400 MHz x 8 = 64 GB RAM total



Corsair Vengeance LPX memory is designed for high-performance overclocking. The heatspreader is made of pure aluminum for faster heat dissipation, and the eight-layer PCB helps manage heat and provides superior overclocking headroom. 

4) Gigabyte X99-SLI Motherboard -




CPU
  1. Support for Intel® Core™ i7 processors in the LGA2011-3 package
  2. L3 cache varies with CPU
(Please refer "CPU Support List" for more information.)
Chipset
  1. Intel® X99 Express Chipset
Memory
  1. 8 x DDR4 DIMM sockets supporting up to 128 GB of system memory
    * Due to a Windows 32-bit operating system limitation, when more than 4 GB of physical memory is installed, the actual memory size displayed will be less than the size of the physical memory installed.
  2. 4 channel memory architecture
  3. Support for DDR4 3400(O.C.) / 3333(O.C.) / 3200(O.C.) / 3000(O.C.) / 2800(O.C.) / 2666(O.C.) / 2400(O.C.) / 2133 MHz memory modules
  4. Support for non-ECC memory modules
  5. Support for Extreme Memory Profile (XMP) memory modules
  6. Support for RDIMM 1Rx8/2Rx8/1Rx4/2Rx4 memory modules (operating in non-ECC mode)
Audio
  1. Realtek® ALC1150 codec
  2. High Definition Audio
  3. 2/4/5.1/7.1-channel
  4. Support for S/PDIF Out
LAN
  1. Intel® GbE LAN chips (10/100/1000 Mbit)
Expansion Slots
    1. 2 x PCI Express x16 slots, running at x16 (PCIE_1, PCIE_2)
      * For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIE_1 slot; if you are installing two PCI Express graphics cards, it is recommended that you install them in the PCIE_1 and PCIE_2 slots.

    1. 2 x PCI Express x16 slots, running at x8 (PCIE_3, PCIE_4)
      * The PCIE_4 slot shares bandwidth with the PCIE_1 slot. When the PCIE_4 slot is populated, the PCIE_1 slot will operate at up to x8 mode.
      * When an i7-5820K CPU is installed, the PCIE_2 slot operates at up to x8 mode and the PCIE_3 operates at up to x4 mode.
      (All PCI Express x16 slots conform to PCI Express 3.0 standard.)
    1. 3 x PCI Express x1 slots
      (The PCI Express x1 slots conform to PCI Express 2.0 standard.)
  1. 1 x M.2 Socket 1 connector for the wireless communication module (M2_WIFI)
Multi-Graphics Technology
  1. Support for NVIDIA® Quad-GPU SLI™ and 4-Way/3-Way/2-Way NVIDIA®SLI™ technologies
  2. Support for AMD Quad-GPU CrossFireX™ and 4-Way/3-Way/2-Way AMD CrossFire™ technologies
* The 4-Way NVIDIA® SLI™ configuration is not supported when an i7-5820K CPU is installed. To set up a 3-Way SLI configuration, refer to "1-6 Setting up AMD CrossFire™/NVIDIA® SLI™ Configuration."
Storage Interface
Chipset:
  1. 1 x M.2 PCIe connector
    (Socket 3, M key, type 2260/2280 SATA & PCIe x2/x1 SSD support)
  2. 1 x SATA Express connector
  3. 6 x SATA 6Gb/s connectors (SATA3 0~5)
  4. Support for RAID 0, RAID 1, RAID 5, and RAID 10
    * Only AHCI mode is supported when an M.2 PCIe SSD or a SATA Express device is installed.
    (M2_10G, SATA Express, and SATA3 4/5 connectors can only be used one at a time. The SATA3 4/5 connectors will become unavailable when an M.2 SSD is installed in the M2_10G connector.)
Chipset:
  1. 4 x SATA 6Gb/s connectors (sSATA3 0~3), supporting IDE and AHCI modes only
    (An operating system installed on the SATA3 0~5 connectors cannot be used on the sSATA3 0~3 connectors.)
USB
Chipset:
  1. 4 x USB 3.0/2.0 ports (available through the internal USB headers)
  2. 8 x USB 2.0/1.1 ports (4 ports on the back panel, 4 ports available through the internal USB headers)
Chipset + Renesas® uPD720210 USB 3.0 Hub:
  1. 4 x USB 3.0/2.0 ports on the back panel
Internal I/O Connectors
  1. 1 x 24-pin ATX main power connector
  2. 1 x 8-pin ATX 12V power connector
  3. 1 x PCIe power connector
  4. 1 x M.2 Socket 3 connector
  5. 1 x SATA Express connector
  6. 10 x SATA 6Gb/s connectors
  7. 1 x CPU fan header
  8. 1 x water cooling fan header (CPU_OPT)
  9. 3 x system fan headers
  10. 1 x front panel header
  11. 1 x front panel audio header
  12. 2 x USB 3.0/2.0 header
  13. 2 x USB 2.0/1.1 headers
  14. 1 x Trusted Platform Module (TPM) header
  15. 1 x Thunderbolt add-in card connector
  16. 1 x Clear CMOS jumper
Back Panel Connectors
  1. 1 x PS/2 keyboard port
  2. 1 x PS/2 mouse port
  3. 4 x USB 3.0/2.0 ports
  4. 4 x USB 2.0/1.1 ports
  5. 1 x RJ-45 port
  6. 1 x optical S/PDIF Out connector
  7. 5 x audio jacks (Center/Subwoofer Speaker Out, Rear Speaker Out, Line In, Line Out, Mic In)
  8. 2 x Wi-Fi antenna connector holes
I/O Controller
  1. iTE® I/O Controller Chip
H/W Monitoring
  1. System voltage detection
  2. CPU/System/Chipset temperature detection
  3. CPU/CPU OPT/System fan speed detection
  4. CPU/System/Chipset overheating warning
  5. CPU/CPU OPT/System fan fail warning
  6. CPU/CPU OPT/System fan speed control
    * Whether the fan speed control function is supported will depend on the cooler you install.
BIOS



  1. 2 x 128 Mbit flash
  2. Use of licensed AMI UEFI BIOS
  3. Support for DualBIOS™
  4. Support for Q-Flash Plus
  5. PnP 1.0a, DMI 2.7, WfM 2.0, SM BIOS 2.7, ACPI 5.0
Unique Features
  1. Support for APP Center
    * Available applications in APP Center may differ by motherboard model. Supported functions of each application may also differ depending on motherboard specifications.
    @BIOS
    Ambient LED
    EasyTune
    EZ Setup
    Fast Boot
    Cloud Station
    ON/OFF Charge
    Smart TimeLock
    Smart Recovery 2
    System Information Viewer
    USB Blocker
    V-Tuner
  2. Support for Q-Flash
  3. Support for Smart Switch
  4. Support for Xpress Install
Bundle Software
  1. Norton® Internet Security (OEM version)
  2. Intel® Smart Response Technology
  3. cFosSpeed
Operating System
  1. Support for Windows 10/8.1/8/7
Form Factor
  1. ATX Form Factor; 30.5cm x 24.4cm


5) Cooler Master GX 750 (Power Supply) -




    • Temperature
      • 0 to 40 °C (Operating Temperature)
    • Protection
      • Over Voltage Protection (OVP), Under Voltage Protection (UVP), 
      • Over Temperature Protection (OTP), Short Circuit Protection (SCP)
    • Safety Standards
      • KCC, 80 Plus, CCC, GOST, C-Tick, TUV, FCC, CE, UL, BSMI

7) NVIDIA GeForce GT 610




    GPU Engine Specs:
      CUDA Cores: 48
      Base Clock: 810 MHz
      Texture Fill Rate: 6.5 billion/sec
    Memory Specs:
      Memory Clock: 1.8 Gbps
      Standard Memory Config: 1024 MB
      Memory Interface: DDR3
      Memory Interface Width: 64-bit
      Memory Bandwidth: 14.4 GB/sec
    Feature Support:
      OpenGL: 4.2
      Bus Support: PCI Express 2.0
      Certified for Windows 7: Yes
      Supported Technologies: CUDA, PhysX
    Display Support:
      Microsoft DirectX: 12 API
      Multi Monitor: Yes
      Maximum Digital Resolution: 2560x1600
      Maximum VGA Resolution: 2048x1536
      HDCP: Yes
      HDMI: Yes
      Standard Display Connectors: Dual Link DVI-I, HDMI, VGA
      Audio Input for HDMI: Internal
    Standard Graphics Card Dimensions:
      Length: 5.7 inches
      Height: 2.7 inches
      Width: Single-width
    Thermal and Power Specs:
      Maximum GPU Temperature: 102 C
      Maximum Graphics Card Power: 29 W
      Minimum System Power Requirement: 300 W
    3D Vision Ready:
      3D Blu-Ray: Yes
      3D Photos: Yes

It's Time to Love Each Other



Fingers crossed, I hope these new components love each other; wishing them a very happy married life.

Believe me guys, a lot of motivation is required when it comes to building your own home lab, and that too from scratch. Thanks to one of my colleagues who supercharged me.

Do let me know if you want me to perform some special use cases or some integration that you have always been waiting to see, or maybe some new functionality that you want me to test.