Lab Setup for Failover Clustering (exam 70-643)

There are basically two ways to provide resilience in a Microsoft environment: Network Load Balancing and Failover Clustering. Microsoft recommends Failover Clustering for high availability of databases (where the data needs to be shared between the hosts), and Network Load Balancing (an NLB Cluster) for web services, where each host can keep an individual copy of the data.

In this lab we are going to create a Network Load Balancing cluster to support IIS, and then a Failover Cluster for Hyper-V services.

Cluster Shared Volumes (CSV) is a feature of Windows Server 2008 R2 that can be used with the Hyper-V role; it consists of a volume in a Failover Cluster that multiple nodes can read from and write to at the same time.

Step 1: Setup an NLB Cluster

Let's first explore the concept of Network Load Balancing by creating such a cluster between our 2 x web servers, WebServer01 (Full Installation) and WebServer02 (Core Installation).


1.- On each of the servers, run this command to install the Network Load Balancing feature:

dism /online /Enable-feature /featurename:NetworkLoadBalancingHeadlessServer

2.- Once installed, run the NLB Manager from WebServer01, select to create a New Cluster and type the name of the server

3. Add the IP to be used as the Cluster IP address

4. Name the cluster "Widget", which is the name of the website we are going to protect, and ensure the Cluster Operation Mode is set to Multicast to allow inter-host communication through the only NIC that we have

5.- Edit the port rule and ensure the cluster IP is set, as well as the filtering mode to Multiple Hosts

6.-This is how our NLB Cluster should look at the end


7.- And finally, add an A record in DNS pointing to our cluster; once users are directed to that record, it will be the job of the cluster to balance out the requests
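If your DNS server is a Windows box, the record can also be added from the command line with dnscmd; a sketch, where the zone name and the cluster IP below are hypothetical placeholders for your own lab values:

```shell
rem Hypothetical zone (widget.local) and cluster IP (192.168.1.50) -- adjust to your lab
dnscmd /recordadd widget.local widget A 192.168.1.50
```

Run it on the DNS server itself (or pass the server name as the first argument to dnscmd).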


Though I give my credits in the References section at the bottom of each page, I'd like to congratulate Karim Elatov here for this fantastic article:


Stuff that is good to know about High Availability (NLB Cluster)

  1. DNS Round Robin and Network Load Balancing; used for services and applications which maintain an internal data store, where each node participating in the NLB cluster has a copy of the data
  2. DNS Round Robin with Netmask Ordering is enabled by default on DNS; it is normally used for IIS. You enter 2 x A records for the same name of the website you want to support, pointing to the IP addresses of the different IIS servers that host an exact copy of the site; when users access the site, DNS will redirect them to either of the 2 x IP addresses. This is not really load balancing: if one of the IIS servers has a low-spec CPU, it will still get the same hits as servers with more capable processing power
  3. Network Load Balancing is normally used for VPNs and RDP
  4. High Availability is a combination of two things:
    1. Fault Tolerance: resources will remain available when an unexpected failure occurs
    2. Redundancy: the existence of duplicate components that will grant availability in case a single component breaks
  5. Maximum number of hosts allowed on a cluster is 32


Step 2: Create a Failover Cluster (preparation steps)

1.- Using VMware Workstation, create the following VMs:

  • 70-643-N1; install Microsoft Hyper-V Core edition
  • 70-643-N2; install Microsoft Hyper-V Core edition
  • 70-643-DS; install Microsoft Windows Server 2008 R2 Enterprise Edition, this will be our Data Store (DS) for the cluster

2.- Bind the virtual LAN of all those machines to VMnet3, the same LAN to which our 70-643 Hyper-V host is connected; then add these machines to the domain after configuring on them the IP addresses stated in the table below

3. Enable Remote Management on the 2 x Core servers, as well as PowerShell; then run this command to disable the firewall (obviously not advisable in production environments!)

netsh advfirewall set allprofiles state off
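Since we are switching the firewall off only for the lab, it is worth knowing the commands to turn it back on, and a narrower alternative that keeps the firewall up while still allowing remote management; a sketch:

```shell
rem Re-enable the firewall on all profiles when testing is done
netsh advfirewall set allprofiles state on

rem Narrower alternative: allow only the remote administration rule group
netsh advfirewall firewall set rule group="remote administration" new enable=yes
```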

4.- Add the Failover Clustering feature on the 2 x Server Core machines; you can do it in either of these two ways:

  • Run the command: dism /online /enable-feature /featurename:FailoverCluster-Core
  • Use the sconfig utility to enable the feature
  • (On a GUI 2008 R2 server you would use the feature name FailoverCluster-FullServer)

5.- Once enabled, run dism /online /get-features /format:table and verify that the feature is enabled on both servers

6.- Add an additional network card to each of the cluster nodes (servers 70-643-N1 and 70-643-N2) and configure the new NIC to be on 192.168.10.x as per the table of IP addresses. This will be the LAN that carries cluster communication

6.- Add a couple of hard drives to our 70-643-DS server (100GB with the letter V:\ and 20GB with the letter W:\). Now, on our data server 70-643-DS, download the Microsoft iSCSI Software Target 3.3 from here: (note that the installation cannot be started from a Remote Desktop connection). Once you install the iSCSI Software Target, right-click "Devices" to create a Virtual Disk

7.- Microsoft is a bit funny on this: you need to add the extension .vhd when creating the virtual drive:

8.- Enter the size in MB (e.g. 50,000 for a 50GB disk) and skip adding an iSCSI target for now when prompted by the wizard. You should end up like this:

9.- Now launch the wizard to create a new iSCSI target, call it with the name of the server

10. By clicking "Browse" the system should find the IQN identifier, but if it doesn't, click "Advanced", choose DNS as the identifier type, and enter the name of our data server

11. Add our previously created virtual disk to the newly created iSCSI Target

12. And finally, edit the properties of the iSCSI Target and add our 2 x server initiators to it... can't find them? Read below to install the iSCSI initiator on the Server 2008 R2 Core editions

12. Jumping now to our 2 x Core node servers, type iscsicpl at the command prompt to launch the Microsoft iSCSI Initiator

13. Once the applet opens, type the name of our target and click "Quick Connect"; it should find it...
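On a Core box the same connection can also be scripted with the iscsicli command-line tool; a sketch, where the portal IP and the target IQN are hypothetical placeholders (copy the real IQN from the ListTargets output):

```shell
rem Hypothetical portal IP -- point this at the 70-643-DS data server
iscsicli QAddTargetPortal 192.168.10.30

rem List the targets discovered through that portal
iscsicli ListTargets

rem Log in to the discovered target (IQN below is a made-up example)
iscsicli QLoginTarget iqn.1991-05.com.microsoft:70-643-ds-target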

14. Now create another virtual hard disk and add it to the target; make it about 15GB. That will be our witness disk for the cluster

15. Now that we have both nodes with the datastore attached, it is time to configure the partitions on one of them; the other will pick up the settings the minute we refresh. So go to server 70-643-N1 and type these commands:

  • diskpart > rescan
  • diskpart > list disk
  • diskpart > select disk 1
  • diskpart > create partition primary
  • diskpart > format fs=ntfs quick
  • diskpart > assign letter=W
  • diskpart > list volume

You should then see something like this, with the letter W for the witness disk and the letter V for the data drive
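The same interactive sequence can be saved to a plain text file and replayed non-interactively with diskpart /s, which is handy when preparing several nodes; a sketch (the file name is arbitrary):

```shell
rem disk-setup.txt -- replay with: diskpart /s disk-setup.txt
rescan
list disk
select disk 1
create partition primary
format fs=ntfs quick
assign letter=W
list volume
```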

16.- Almost there, but before jumping further, let's put a couple of VMs on this shared disk so we are ready to play with them later. Add an additional 50GB virtual drive to our main Hyper-V server, configure it with diskpart and create on it two folders called "sp-sql" and "sp-ui", where we'll export our SharePoint VMs to be protected in the cluster

17.-Open Hyper-V and export our VMs to that drive

18.- Shut down our VMs "70-643" and "70-643-DS" and move the hard drive containing the exported SharePoint VMs from one server to the other, then create a new virtual disk and choose to copy the contents of the newly added disk (the disk that contains the exported VMs)

19.- Now we'll do something extraordinary and delete the 50GB drive that we created earlier in the iSCSI Software Target console


20.- Then edit the Hyper-V settings of whichever of the 2 x nodes has the cluster datastore active and attached, and configure it so that both the Virtual Hard Disks folder and the Virtual Machines folder are pointing to the shared datastore drive letter V:\


Wonderful! Now it is time to get our hands dirty with Failover Clustering, that would be cool!

Step 3: Create a Failover Cluster (Hyper-V clustering)

Right, let's visit our 2 x nodes and install Hyper-V by running this command:

DISM /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V

1.- Then fire up the Failover Cluster Manager from our 70-643-DS virtual machine, right-click the main node and select "Create a cluster...", then add our 2 x server nodes to the list

2.- Let's call the cluster "Hyper-V-Cluster" and assign the IP address as specified in our table below

3. We are definitely ready to create a cluster, so hit "Next"!

4.- By all means, have a look at the report after the cluster creation; you'll notice the Quorum has automatically been set to "Node and Disk Majority", as the wizard will have detected the shared storage between the 2 x nodes

5. Open AD and notice as well that a new object has been created for this cluster, isn't that cool?

6. Shut down our main 70-643 Hyper-V server and add a new virtual disk of about 50GB, exporting to it the 2 x VMs that we want to protect on the cluster. Then open the drive \\70-643-N1\v$ (or whichever of the 2 x nodes is hosting the cluster drive), and copy to it the 2 x VMs that we already exported from our main Hyper-V 70-643. In principle, you should be able to export the VMs directly onto one of our nodes, as I've done on the screenshot below:

7.- Finally, visit the Failover Cluster Manager, right-click "Services and applications" and choose "Configure a Service or Application..."

8. Select the "Virtual Machine" service, and then select both the VMs that we want to protect

9.- Finish the wizard, then go ahead and start both VMs from a Hyper-V console; then visit the Failover Cluster Manager and run a validation of the cluster. There should be no errors at all (visit the event log of each node and clear the System log to remove any previous errors)

10.- And this is it; make a snapshot of all the nodes and datastores because, most likely, we'll break the cluster with our testing! Good luck

11. Let's continue with the fun and enable Cluster Shared Volumes by clicking the "Enable Cluster Shared Volumes..." link under Configure, on the Cluster summary page



Stuff that is good to know about High Availability (Failover Clustering)

  1. Failover Clustering; used for stateful applications and services which rely on an external shared data store such as a SAN
  2. The "Quorum" configuration of a cluster determines the point at which too many failures of the cluster elements (nodes and shared disks) will stop the cluster from running. For example, if you have 2 nodes and they stop communicating with one another (both of them are working, but the link between them is broken), then who is to say which of the 2 nodes owns the database, service or application being clustered? To avoid overriding and clashing of data, the quorum concept comes into play: whichever of the 2 nodes achieves the 'quorum' state will hold the valid copy of the clustered service or application. There are 4 possible quorum configurations:
    1. Node Majority [3, 5, 7, etc.] (recommended for clusters with an odd number of nodes) Can sustain failures of half the nodes (rounding up) minus one. For example, a seven-node cluster can sustain three node failures; the partition that contains more nodes is the one that achieves quorum status first
    2. Node and Disk Majority [2 plus disk, 4 plus disk, etc] (recommended for clusters with an even number of nodes) Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six node cluster in which the disk witness is online could sustain three node failures. Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six node cluster with a failed disk witness could sustain two (3-1=2) node failures.
    3. Node and File Share Majority [different IP subnets in use] (for clusters with special configurations) Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness. Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see "Additional considerations" in Start or Stop the Cluster Service on a Cluster Node.
    4. No Majority [Disk Only] (not recommended) Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.
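The failure-tolerance rules above boil down to simple arithmetic; a minimal sketch in plain POSIX shell (the node counts are just the examples from the list):

```shell
# Max node failures each quorum model can sustain (rules from the list above).
# ceil(n/2) is computed as (n + 1) / 2 using integer division.

n=7                                   # Node Majority example: 7-node cluster
node_majority=$(( (n + 1) / 2 - 1 ))  # half rounding up, minus one -> 3

n=6                                   # Node and Disk Majority example: 6 nodes
with_witness=$(( (n + 1) / 2 ))       # witness online: half rounding up -> 3
witness_lost=$(( (n + 1) / 2 - 1 ))   # witness offline/failed: 3 - 1 -> 2

echo "$node_majority $with_witness $witness_lost"   # 3 3 2
```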


Guide: Table of IP Addresses


Server Name              | IP Address           | Roles and Features
Hyper-V Node 1           |                      |
Hyper-V Node 2           |                      |
iSCSI Target for storage |                      |
Hyper-V-Cluster          | [Cluster virtual IP] |

Step 5: Configure e-mail for SharePoint and Exchange (SP Directory Management Service)

Right, one of the most challenging parts I found in the material for this exam was the stuff to do with SharePoint and Exchange, but then again I saw that as a great opportunity to get to know more about these wonderful Microsoft products

To setup our environment I created 3 VMs:

  1. MAIL, an Exchange 2010 server
  2. SP-SQL, a SQL 2012 server to hold the SharePoint content database
  3. SP-UI, the SharePoint 2010 server that holds the graphical interfaces, site collections, etc.

When installing SharePoint 2010 on the SP-UI server, I chose to create a SharePoint Farm and pointed to SP-SQL as the server to hold the farm database. This is the Farm information of the SharePoint server; if we wanted, we could add more SharePoint servers to the farm


Stuff you can do to learn SharePoint:

  1. As mentioned above, create a server to host the farm SQL database and then 2 x SharePoint servers that will be part of that farm

And in relation to configuring Exchange + SharePoint for e-mail, what's the hidden stuff to know? I would recommend the following:

  1. Ensure the SMTP service on the SharePoint server is set to start automatically (not manually, as is the default)
  2. Ensure the Domain Firewall profiles are disabled accordingly to enable communication between the Exchange and SharePoint servers
  3. When delegating the "SharePoint contacts" OU in Active Directory, choose the administrative account that manages the Central Administration application pool on the SharePoint server; you can check the name of that account by visiting the IIS console on the SharePoint server.

Stuff that is good to know about SharePoint and Exchange:

The proper steps to configure incoming e-mail messages on SharePoint 2010 are as follows:

  • Add the SMTP feature on the SharePoint 2010 server
  • Configure the incoming e-mail under Central Administration > System Settings > Configure Incoming E-mail
  • Select "Automatic" because we are using the local SMTP feature, otherwise use "Advanced" and configure a drop folder where SharePoint will look for e-mails that are arriving
  • To enable SharePoint Directory Management Services, follow these steps:
    1. Create an OU in AD dedicated to holding the contacts and distribution groups
    2. Delegate the "Create, Delete and Manage" user account access for the application identity of the Central Administration account to the OU
    3. The SMTP feature must be installed on the SharePoint server
    4. Configure SharePoint to accept relays from the domain Exchange server
    5. Create an MX record for the Exchange server to route e-mail outside your organisation to an SMTP server
  • Create an A record for the FQDN of the SharePoint server on DNS
  • Create a local domain under the SMTP server in IIS, the address must match the FQDN of the server that receives e-mail
  • Add an SMTP connector to the Microsoft Exchange server

The proper steps to configure outgoing e-mail on SharePoint 2010 are as follows:

  • You can use the stsadm -o email command, but you must be a member of the Administrators group
  • You can also configure the e-mail by going to Central Administration and opening System Settings > Configure Outgoing e-mail settings, and then enter the SMTP server details, the from and the reply e-mail addresses
  • The SMTP server that you choose must (obviously) be connected to the Internet; ensure the SMTP server allows anonymous access and accepts e-mail messages relayed from the SharePoint server
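The stsadm route from the first bullet looks like this; the server name and addresses below are hypothetical, and stsadm must be run from the SharePoint 14-hive bin directory (or with its full path):

```shell
rem Hypothetical SMTP server and addresses -- substitute your own
stsadm -o email -outsmtpserver MAIL.widget.local -fromaddress sharepoint@widget.local -replytoaddress noreply@widget.local -codepage 65001
```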



Handy to know PowerShell commands about SharePoint:

  1. Just a humble list of the commands that I find most interesting and prone to confusion:
    • Set-SPSite; allows changing the quota template for a site
    • Set-SPSiteAdministration; to configure a site collection (but not to change the quota)
    • psconfig.exe with the -configdb parameter; to configure the SharePoint database
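As an example of the first cmdlet, this is how a quota template can be applied to a site collection from the SharePoint 2010 Management Shell; the site URL and template name are hypothetical:

```powershell
# Hypothetical site URL and quota template name -- adjust to your farm
Set-SPSite -Identity "http://sp-ui/sites/widget" -QuotaTemplate "TeamSiteQuota"
```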

Step 6: Set Parent virtual hard drives as Read-Only

Once you have both VMs shut down (the ParentGUI and the ParentCore), log on to our hypervisor and set the .vhd drives that we have created as Read-Only by running attrib +R
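For example (the paths below are hypothetical, point them at wherever your parent disks live):

```shell
rem Hypothetical paths -- substitute the location of your parent disks
attrib +R "V:\VHDs\ParentGUI.vhd"
attrib +R "V:\VHDs\ParentCore.vhd"

rem Verify: attrib with no switches lists the current attributes
attrib "V:\VHDs\*.vhd"
```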


And to end this setup, shut down our hypervisor and create a snapshot of it, or better a clone, so you can use it for different Microsoft exams. Good luck exploring the 70-6xx world!

21st November 2014


Clustering and High-Availability. Thanks to Symon Perriman for this amazing blog;

Microsoft Network Load Balancing Multicast and Unicast Operation Modes. It has to be VMware, awesome knowledge base

Load Balancing IIS Sites with NLB; fantastic article from Karim Elatov, all my thanks to him!

Configure the iSCSI initiator in Windows Server Core or Hyper-V Server; nice job Rick Vanoner, thank you

How to Clean Up Cluster Nodes from Destroyed Clusters; awesome article from DarmirB

Windows Server 2008 R2 Hyper-V Cluster Shared Volumes and NTLM; short but powerful article from Aidan Finn, great!

Understanding Quorum Configurations in a Failover Cluster; another good doc from TechNet

An error occurred while attempting to start the selected virtual machine(s). 'name' could not initialize. If you are getting this ambiguous error message while trying to start a VM inside a nested Hyper-V, most likely you need to install hotfix KB2517374, available here:

You may also need to add hypervisor.cpuid.v0 = "FALSE" to the VM's .vmx file so Hyper-V can start inside VMware.