Typically in a cluster, a quorum scheme is selected to ensure that, in the event of a split of the cluster, only one partition of the cluster can offer services. This prevents possible corruption (a split-brain condition). Common quorum schemes include the following:
- Node majority – Each node in the cluster has a vote
- Node and disk majority – Each node in the cluster has a vote as does a shared disk
- Node and file share majority – Each node in the cluster has a vote as does a file share (the file share witness)
- Disk only – Only a shared disk has a vote (this isn’t typically used)
The following table describes clusters based on the number of nodes and other cluster characteristics, and lists the quorum mode that is recommended in most cases.
| Description of cluster | Quorum recommendation |
| --- | --- |
| Odd number of nodes | Node Majority |
| Even number of nodes (but not a multi-site cluster) | Node and Disk Majority |
| Even number of nodes, multi-site cluster | Node and File Share Majority |
| Even number of nodes, no shared storage | Node and File Share Majority |
One challenge in the past with any majority calculation was that more than 50 percent of the total votes had to be present; if nodes were stopped or taken down, their votes were no longer available. In a five-node cluster, shutting down three nodes for planned maintenance would remove three votes, and the whole cluster would shut down because the two remaining votes are not more than 50 percent of the five total votes.
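The static majority rule can be sketched as follows (a purely illustrative Python model; the real quorum arbitration is performed internally by the cluster service):

```python
def has_quorum(votes_present: int, total_votes: int) -> bool:
    """Static majority: strictly more than half of ALL configured
    votes must be present, even if some voters were shut down."""
    return votes_present > total_votes / 2

# Five-node cluster using node majority: 5 configured votes.
TOTAL_VOTES = 5

# Three nodes shut down for planned maintenance leave only 2 votes.
print(has_quorum(2, TOTAL_VOTES))  # False: 2 is not > 2.5, so quorum is lost
print(has_quorum(3, TOTAL_VOTES))  # True: 3 of 5 is a majority
```

Because the total never shrinks under a static scheme, even a planned shutdown of a majority of nodes takes the whole cluster offline.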
Dynamic quorum modifies the vote allocation to nodes dynamically. For example, if a node is shut down as part of a planned configuration, its vote is removed and therefore no longer counted in quorum calculations.
Consider a five-node cluster. With dynamic quorum, as the three nodes were shut down in a planned manner, their votes are removed, leaving only two votes remaining, allowing the cluster to maintain quorum and stay functioning since those two votes are available on the two remaining nodes.
As nodes are started again, their votes are restored, and they participate in quorum calculations once more. Note that if you manually removed the vote of a node in a cluster, dynamic quorum can't give it a vote.
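Dynamic quorum can be modeled as shrinking the total vote count whenever a node is shut down cleanly (an assumed simplification for illustration; the actual recalculation logic is internal to the cluster service):

```python
def has_quorum(votes_present: int, total_votes: int) -> bool:
    return votes_present > total_votes / 2

# Five-node cluster; each node starts with one vote.
votes = {f"node{i}": 1 for i in range(1, 6)}

# Three nodes are shut down in a planned manner: dynamic quorum
# removes their votes from the calculation entirely.
for node in ("node3", "node4", "node5"):
    votes[node] = 0

present = sum(votes.values())   # 2 votes remain on the running nodes...
total = sum(votes.values())     # ...out of a dynamic total of only 2
print(has_quorum(present, total))  # True: 2 > 1, the cluster stays up
```

The key difference from the static model is that the denominator shrinks along with the planned shutdowns, so the surviving nodes still hold a majority.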
Dynamic quorum is enabled by default in a Windows Server 2012 cluster and can be changed by selecting the Advanced quorum configuration and witness selection option.
To check the current votes in a cluster, run the Get-ClusterNode Windows PowerShell cmdlet; the DynamicWeight property shows whether each node currently has a vote:

```
Name       DynamicWeight NodeWeight State
----       ------------- ---------- -----
savdalfc01             1          1 Up
savdalfc02             1          1 Up
```
One question that's commonly asked is: if you have only two nodes in the cluster and are using node majority, how does dynamic quorum work?
When there are only two nodes left, one of the nodes loses its DynamicWeight, so only one node now has a vote (the node is chosen randomly). This ensures that if the node without a vote crashes, the voting node can stay active, giving you a 50/50 chance of surviving an unplanned failure of a node. If the voting node is taken down cleanly, the vote is moved to the other node and the cluster stays online. A summary of the possible outcomes for the two remaining nodes, NodeA and NodeB, is shown below. In this example, NodeA holds the vote and NodeB does not:
- If NodeB goes down (node without a vote), the cluster stays up with NodeA (last man standing).
- If NodeA and NodeB lose communication, the cluster stays up with NodeA (last man standing).
- If NodeA goes down in an unplanned scenario, then the cluster goes down as NodeB doesn’t have a current vote to survive.
- If NodeA is gracefully shut down, the cluster removes NodeA’s current vote and gives NodeB’s current vote back and the cluster stays up with NodeB (last man standing).
- If NodeB is gracefully shut down, the cluster stays up with NodeA (since it has the vote and is last man standing).
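The five scenarios above can be captured in a small decision function (a hypothetical Python model of the tie-break behavior, not the actual cluster service logic):

```python
def two_node_outcome(voter_fails: bool, planned: bool) -> str:
    """Model the two-node dynamic quorum tie-break, where one node
    (the 'voter') holds the single remaining vote."""
    if not voter_fails:
        # The non-voting node failed or was shut down; the voter survives.
        return "cluster stays up on the voting node"
    if planned:
        # Graceful shutdown of the voter: its vote is moved to the
        # surviving node before the shutdown completes.
        return "cluster stays up on the surviving node"
    # Unplanned crash of the voting node: the survivor holds no vote.
    return "cluster goes down"

print(two_node_outcome(voter_fails=False, planned=False))
print(two_node_outcome(voter_fails=True, planned=True))
print(two_node_outcome(voter_fails=True, planned=False))
```

Note that the only losing scenario is an unplanned failure of the node that happens to hold the vote, which is why an extra witness is recommended below.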
For the greatest protection from unplanned failure with two nodes, you would want an additional witness configured (such as a file share or disk). This configuration gets much better in Windows Server 2012 R2.