Windows Failover Clustering can keep your SQL Server databases online and your customers happy when hardware or operating system problems strike. Failover Clustering also makes your life easier when you need to do maintenance on a physical server: just fail your SQL Server instance over to another server and complete your work with minimal downtime.
However, be careful: if you misconfigure your cluster, you are bound for poor SQL Server performance or extended unplanned downtime. Do not let an otherwise great high availability feature cause weak performance or outages. Let us learn how to avoid some of the most prevalent SQL Server cluster setup mistakes. It is essential that you properly plan your nodes and instances.
Start by considering the following: Nodes are the servers configured as members of the Windows Failover Cluster. Each node may be part of only one failover cluster and must be a member of a Windows domain.
Most people plan Windows Failover Clusters for SQL Servers, where both performance and availability are important. If this is the case, plan one node per SQL Server instance plus at least one additional passive node. Ask yourself, “How many servers can fail before performance is degraded?” That answer is the number of passive nodes you need in your cluster.
Each SQL Server instance can move from node to node, but each instance can be active on only one node at any given time. Every SQL Server instance has its own collation, user and system databases, logins, and SQL Server Agent jobs, and all of those components move along with the instance.
You might be tempted to put all your nodes and SQL Server instances in a single cluster. That seems attractive for cost savings on hardware: a four-node cluster with one passive node sounds better than a pair of two-node clusters, each with a passive node. However, consider the following factors:
For a SQL Server Failover Cluster Instance, all nodes must have access to shared storage: each user and system database anywhere in the cluster has just one copy of its data, stored in a central location. (Foreshadowing: there is one exception to this. Read on!) The shared storage may be a large, sophisticated storage area network or a smaller storage array.
To support clustered SQL Server in production, you need a non-production SQL Server staging cluster. The staging cluster should match production as closely as possible.
SOME FREQUENTLY ASKED QUESTIONS ABOUT DESIGNING CLUSTERS FOR SQL SERVER
It is easy to get confused when sizing hardware for a clustered SQL Server instance. There is a ton of misinformation out there.
“BUY ENOUGH MEMORY. HOWEVER, DO NOT ASSUME YOU ARE LIMITED TO USING HALF OF IT!”
Do not go cheap on memory: it is critical to SQL Server performance.
Many people incorrectly believe that if they have two instances on a two-node cluster, they need to set SQL Server’s Max Server Memory setting to 50% or less of the total memory of each node to protect performance in case both instances need to run on a single node.
Do not do that. Beginning with SQL Server 2005, if multiple SQL Server instances share a single node, they negotiate with each other and balance memory use. You may use the Min Server Memory setting to prioritize one instance over another. (v)
Min Server Memory is also useful when another memory-intensive process runs on the same computer, since it ensures that SQL Server keeps at least a reasonable amount of memory for itself.
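Here is a minimal sketch of setting those knobs from PowerShell. The instance name (SQL01\INST1) and the memory values are assumptions for illustration; size yours for your own hardware.

```powershell
# A minimal sketch: give the higher-priority instance a Min Server Memory
# floor so it keeps a reasonable share if two instances land on one node.
# Instance name and MB values are assumptions.
# Requires the SqlServer module: Install-Module SqlServer
Import-Module SqlServer

$query = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 16384;  -- guaranteed floor
EXEC sp_configure 'max server memory (MB)', 49152;  -- leave room for the OS
RECONFIGURE;
"@

Invoke-Sqlcmd -ServerInstance 'SQL01\INST1' -Query $query
```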
“BUY MULTIPLE PHYSICAL NETWORK INTERFACE CONTROLLERS.”
Beginning with Windows Server 2008, you no longer need to configure a separate heartbeat network for a Windows Failover Cluster. Instead, you must make sure your network path is redundant.
That means you need two physical network adapters or network interface controllers in each cluster node. You may bring these together into a single network team; the point is that communication must be able to continue if either physical network interface controller fails.
The network interface controllers must be connected to a network with redundant switches, and that network should carry only your SQL Server and cluster traffic. In other words, your storage traffic should not share it: if you use Internet Small Computer Systems Interface (iSCSI) storage, it must have its own dedicated network interface controllers. (vi)
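As a sketch, teaming two adapters with the built-in Windows Server LBFO teaming looks like this. The adapter and team names are assumptions; list yours with Get-NetAdapter.

```powershell
# A minimal sketch: team two physical adapters so cluster and SQL Server
# traffic survives the failure of either one. Adapter names are assumptions.
New-NetLbfoTeam -Name 'ClusterTeam' `
                -TeamMembers 'NIC1','NIC2' `
                -TeamingMode SwitchIndependent
```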
There is an exception to the rule mentioned above that all user and system databases must be on shared storage. Beginning with SQL Server 2012, you can configure tempdb on local storage for a clustered SQL Server instance. When performance is important, speeding up tempdb is attractive, but solid state drives can be expensive to implement in shared storage devices.
Lots of people choose to install solid state drives into each server node to make tempdb fast. The drives may be either 2.5-inch solid state drives or Peripheral Component Interconnect Express cards. If you take this approach, remember that the tempdb drive letter and folder path must exist on every node, or SQL Server will fail to start after a failover.
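A minimal sketch of the move itself, assuming a hypothetical local path S:\tempdb that already exists on every node, the default tempdb logical file names, and an assumed instance name; the change takes effect at the next instance restart.

```powershell
# A minimal sketch: point tempdb at local solid state storage (SQL Server 2012+).
# The S:\tempdb path and instance name are assumptions.
Import-Module SqlServer

$query = @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\tempdb\templog.ldf');
"@

Invoke-Sqlcmd -ServerInstance 'SQL01\INST1' -Query $query
```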
When you are planning a new Windows Failover Cluster, the Windows Server version is important. Selecting the right Windows installation can make your cluster easier to manage and easier to keep online.
However, do not base your decision only on the improved features. It is crucial to determine the best patch configuration for your Windows Server and to select the latest version of SQL Server if possible.
USE THE LATEST WINDOWS SERVER WHENEVER POSSIBLE
If you are planning a new cluster for your SQL Server, identify the highest Windows Server version supported by your company. Map out all the improvements the new operating system brings and the hotfixes it helps you avoid: your company may not support a new OS until you advocate for change.
DO NOT MIX SQL SERVER VERSIONS
Sometimes people ask if they can install different SQL Server versions on a single Windows Failover Cluster. This uncommon configuration is called a side-by-side cluster deployment.
Although this is technically possible, I do not recommend it. Installation is complex: you must install each SQL Server version from lowest to highest when setting up every node.
Even worse, whenever you need to troubleshoot or seek support for an incident, your very specialized configuration makes it more difficult to resolve the incident and find the root cause of the problem.
You must understand quorum and configure it properly in the Windows Failover Cluster to keep your SQL Server databases online. That is tricky. The best practices for configuring quorum vary depending on the version of Windows you are using, the number of nodes you have, and the reliability of network communication between the nodes.
Failover clusters want to keep your databases online, but they have a problem: they must make sure each SQL Server instance is online on only a single node at a time!
Quorum is the process by which each member of the cluster stops, takes attendance, and checks which members of the cluster own each resource. Failover Clustering got a lot better at checking quorum in Windows Server 2012 and later versions: instead of always counting the initial cluster configuration as the total number of possible votes, the current members of the cluster at any given time are counted as the voting population.
Potential voting members of the cluster are the nodes themselves and, optionally, a single witness (such as a disk witness or a file share witness).
Your cluster always wants to have an odd number of votes available. Clusters just hate a tie because it is challenging for them to know what to do!
Imagine you have a Windows Failover Cluster with four nodes, and you have not configured a witness of any kind. Each node is granted a single vote, so you have a total of four votes. One SQL Server instance is installed on the cluster, and it currently resides on Node 2.
Suddenly, a network crisis occurs! Nodes 1 and 2 can see one another, but not Nodes 3 and 4. Nodes 3 and 4 can see each other, but neither can see Nodes 1 and 2.
What happens now when each node counts votes? Node 1 and Node 2 can see each other and count two votes out of four: that is not a majority. Node 3 and Node 4 are in the same predicament. Neither group has a majority: that means neither group has the unequivocal right to keep the SQL Server instance online.
Depending on your Windows Server version and whether you have established a tiebreaker (available in Windows Server 2012 R2 and onwards), the Windows Failover Cluster service may shut down on every node and take your databases offline with it.
Good news: you can decrease the chances of this happening!

ASSIGN A WITNESS TO GET AN ODD NUMBER OF VOTES
Every cluster may have one witness, or it may have none at all: it is up to you to decide.
The witness is a member of the cluster that helps the cluster decide where your SQL Server instances should be active. You may set up a witness using one of a handful of configurations, most commonly a Disk Witness or a File Share Witness.
The Disk Witness may be a little more work to configure, because the disk must live on shared storage accessible to all of the nodes. However, the Disk Witness is a bit more sophisticated than the File Share Witness and holds more information that may be useful to the cluster when failures occur.
If you decide to use a witness, use a Disk Witness whenever possible.
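Configuring either kind of witness is a one-liner in PowerShell; the disk resource name and share path below are assumptions.

```powershell
# A minimal sketch: prefer a Disk Witness when shared storage allows it.
# 'Cluster Disk Witness' is an assumed cluster disk resource name.
Set-ClusterQuorum -DiskWitness 'Cluster Disk Witness'

# Otherwise, fall back to a File Share Witness on a server outside the cluster:
# Set-ClusterQuorum -FileShareWitness '\\FILESERVER\ClusterWitness'
```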
REMOVE VOTES WHEN APPROPRIATE
You can manually remove the vote of a node. Removing a vote is useful, particularly when you have a cluster that spans data centers. In this example, Node 1 and Node 2 are in a primary data center. Node 3 is in a remote data center, and network communication is unreliable. You will only ever manually fail over to the remote data center.
In this scenario, you can manually remove the vote from the node in the remote data center (see the sketch below) and use a witness to achieve an odd number of votes in the primary data center.
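A minimal sketch of removing that vote, assuming the remote node is named NODE3:

```powershell
# A minimal sketch: strip the vote from the remote data center node
# (NODE3 is an assumed name), then confirm the resulting weights.
(Get-ClusterNode -Name 'NODE3').NodeWeight = 0
Get-ClusterNode | Format-Table Name, State, NodeWeight
```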
WINDOWS SERVER DYNAMIC QUORUM DYNAMICALLY ADJUSTS NODE VOTES
Windows Server 2012 and later support the Dynamic Quorum feature, which makes life easier. When it is enabled, the cluster counts up its current members and decides whether it should dynamically remove a vote so that you have an odd number.
You can check dynamic weights in PowerShell. In this four-node cluster, I have allowed all four nodes a vote (NodeWeight), but the cluster has dynamically reduced the vote for the node ELVIS. That results in an odd number of total votes.
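The check looks something like this. ELVIS comes from the example above; the other node names and the output are illustrative placeholders.

```powershell
# Show each node's configured vote (NodeWeight) and the vote the cluster
# is actually counting right now (DynamicWeight).
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight

# Illustrative output for the four-node example:
# Name    State NodeWeight DynamicWeight
# ----    ----- ---------- -------------
# ELVIS   Up             1             0
# NODE2   Up             1             1
# NODE3   Up             1             1
# NODE4   Up             1             1
```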
Node Dynamic Weight is also visible in the graphical interface of Failover Cluster Manager beginning with Windows Server 2012 R2.
Here is the core cluster quorum recommendation for Windows Server 2019 (x): always configure a witness, and let the cluster manage its vote.
The cluster can decide whether or not it should give the witness a vote based on the state of the witness and the number of available nodes. Windows Server 2019 also provides more options, helping nodes in your primary data center stay online if you end up in a tied vote between your primary and disaster recovery data centers.
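You can see whether the witness's vote is currently being counted; a minimal sketch:

```powershell
# A minimal sketch: inspect the current quorum configuration and whether
# the cluster is counting the witness's vote right now (1 = counted).
Get-ClusterQuorum
(Get-Cluster).WitnessDynamicWeight
```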
RULES FOR DESIGNING QUORUM
Here are a few rules to keep you sane when setting up your cluster: aim for an odd number of votes; prefer a Disk Witness when your shared storage allows it; remove votes from nodes you will only ever fail over to manually; and verify the vote assignments rather than assuming the defaults are right.
Cluster Validation is the most important part of building and configuring a Windows Failover Cluster. Never skip this step. You should run validation a few times: when you first build the cluster, before you install SQL Server, and again whenever you change the cluster's hardware or configuration.
Not all validation tests can be run while the cluster is online. Make sure validation results are perfect before going live, so you do not have to schedule downtime later to rerun them.
TAKE EVERY WARNING OR ERROR SERIOUSLY
Validation is the primary way your cluster can warn you about problems in the configuration. Never assume that warnings can be skipped.
Before you take the cluster into production, make sure you have run a full cluster validation and corrected every error and warning. If you do not correct a warning, document exactly why it is ignorable. Save a copy of your cluster validation reports in a safe spot in case you need them for reference when trouble strikes.
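A minimal sketch of a full validation run, assuming hypothetical node names and a report path you control:

```powershell
# A minimal sketch: validate every node (names are assumptions) and save
# the report somewhere safe for reference when trouble strikes.
Test-Cluster -Node 'NODE1','NODE2','NODE3','NODE4' `
             -ReportName 'C:\ClusterReports\Production-Validation'
```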
WHAT IF YOU DID NOT BUILD THE CLUSTER?
Sometimes the Database Administrator who manages the cluster is not the same person who chose the hardware, installed the Windows Failover Clustering feature, and added the nodes to the cluster. Perhaps the cluster was handed off to you, and you were told, “The cluster is already set up.”
Translation: it is time for you to make sure that Cluster Validation is perfect.