Before starting, I hope you are familiar with the different cluster types and their architecture. In this article I will explain the steps to configure a two-node cluster on CentOS / RHEL 7 Linux nodes. For the purposes of this article I am using Oracle VirtualBox installed on my Linux server.

How is a two-node cluster different from a cluster with three or more nodes?
Quorum is the minimum number of cluster member votes required to perform a cluster operation. Without quorum, the cluster cannot operate. Quorum is achieved when the majority of cluster members vote to execute a specific cluster operation. If the majority of the cluster members do not vote, the cluster operation will not be performed.
In a two-node cluster configuration, the maximum number of expected votes is two, with each cluster node holding one vote. In a failure scenario where one of the nodes goes down, only one node is active and it has only one vote. In such a configuration quorum cannot be reached, since a majority of the votes cannot be delivered: the single surviving node is stuck at 50 percent of the votes and will never get past it. Therefore, the cluster will never operate normally this way.
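The quorum arithmetic above can be sketched in a few lines of shell. The node counts here are purely illustrative; quorum is a strict majority of the total votes:

```shell
# Quorum = floor(total_votes / 2) + 1 (strict majority).
# With one vote per node, a surviving partition needs this many nodes.
for nodes in 2 3 5; do
  echo "${nodes}-node cluster: quorum = $(( nodes / 2 + 1 )) votes"
done
```

Note how a two-node cluster needs both votes (quorum = 2) to be quorate, which is exactly why a single surviving node is stuck at 50 percent.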
2-Node Cluster Challenges
- **Quorum problems:** More than half of the votes can never be reached after a failure in a two-node cluster.
- **Split brain:** Both halves can believe they own the cluster; with fencing enabled, both nodes will try to fence one another.
- **Startup:** The cluster will not start until all nodes are available. This behavior can easily be disabled using the wait_for_all parameter.
How to configure a two-node cluster on CentOS / RHEL 7 Linux?
If you are configuring a two-node cluster on the CentOS 7 cluster stack, you should enable the two_node cluster option. Before making the configuration changes, stop your cluster services:
[root@node1 ~]# pcs cluster stop --all
node1.example.com: Stopping Cluster (pacemaker)...
node2.example.com: Stopping Cluster (pacemaker)...
node1.example.com: Stopping Cluster (corosync)...
node2.example.com: Stopping Cluster (corosync)...
Next, add the following parameters to the quorum section of corosync.conf:
# vim /etc/corosync/corosync.conf
quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 0
}
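After editing the file, a quick grep can confirm the option was saved. This is just a convenience check I am adding here, not part of the cluster stack itself; the path defaults to /etc/corosync/corosync.conf and can be overridden via the first argument:

```shell
# Hypothetical helper: confirm two_node is set in corosync.conf
cfg="${1:-/etc/corosync/corosync.conf}"
if grep -Eq '^[[:space:]]*two_node:[[:space:]]*1' "$cfg"; then
  echo "two_node is enabled in $cfg"
else
  echo "two_node is NOT enabled in $cfg" >&2
fi
```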
By enabling the two_node cluster option, the quorum is artificially set to 1, which means the cluster will be quorate and continue to operate even if one cluster node fails. The two_node option also automatically enables the wait_for_all option, which is why we explicitly set it back to 0 above.
Let us check the cluster status. As you can see, additional flags are now enabled for our two-node cluster:
[root@node1 ~]# corosync-quorumtool
Quorum information
------------------
Date: Wed Dec 26 16:08:19 2018
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 1
Ring ID: 1/356
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 1
Flags: 2Node Quorate
Membership information
----------------------
Nodeid Votes Name
1 1 node1.example.com (local)
2 1 node2.example.com
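Scripting against corosync-quorumtool output is a handy way to alert on quorum state. As a sketch, the two interesting lines from the output above are fed in as a shell variable here so the parsing can be shown without a live cluster; in real use you would pipe the command output into the same awk filter:

```shell
# Extract the Quorate and Flags fields from corosync-quorumtool output.
# Real use:  corosync-quorumtool | awk -F':[[:space:]]*' ...
quorumtool_output='Quorate:          Yes
Flags:            2Node Quorate'
echo "$quorumtool_output" | awk -F':[[:space:]]*' '
  /^Quorate:/ { print "quorate=" $2 }
  /^Flags:/   { print "flags="   $2 }
'
# prints:
#   quorate=Yes
#   flags=2Node Quorate
```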
Now let us try to stop one of the cluster nodes:
[root@node1 ~]# pcs cluster stop node2.example.com
node2.example.com: Stopping Cluster (pacemaker)...
node2.example.com: Stopping Cluster (corosync)...
As you can see, this time the tool did not prevent us from stopping the cluster node.
Let us check the quorum status again:
[root@node1 ~]# corosync-quorumtool
Quorum information
------------------
Date: Wed Dec 26 16:09:30 2018
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 1
Ring ID: 1/360
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 1
Flags: 2Node Quorate
Membership information
----------------------
Nodeid Votes Name
1 1 node1.example.com (local)
So our cluster keeps functioning even with only one node active.
Let us also look at some other basic terminology associated with the corosync quorum configuration:
- **wait_for_all** (default: 0): The general behavior of the votequorum process is to switch from inquorate to quorate as soon as possible. As soon as the majority of nodes are visible to each other, the cluster becomes quorate. The wait_for_all option, or WFA, allows you to configure the cluster to become quorate for the first time only after all the nodes have become visible. If the two_node option is enabled, the wait_for_all option is automatically enabled as well.
- **last_man_standing** (default: 0) / **last_man_standing_window** (default: 10 seconds): The general behavior of the votequorum process is to set the expected_votes parameter and quorum at startup. Enabling the last_man_standing option, or LMS, allows the cluster to dynamically recalculate the expected_votes parameter and quorum under specific circumstances. It is important to enable the WFA option when using the LMS option in high-availability clusters.
- **auto_tie_breaker** (default: 0): When the auto_tie_breaker option, or ATB, is enabled, the cluster can survive the simultaneous failure of up to 50 percent of its nodes. The cluster partition, or the set of nodes that is still in contact with the node that has the lowest nodeid parameter, will remain quorate. The other nodes will be inquorate.
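As an illustration only (the values below are arbitrary examples, not a recommendation for any particular cluster), a quorum section combining these options might look like this:

```
quorum {
    provider: corosync_votequorum
    last_man_standing: 1
    last_man_standing_window: 20000
    auto_tie_breaker: 1
    wait_for_all: 1
}
```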
Lastly, I hope the steps in this article to configure a two-node cluster on Linux (CentOS / RHEL 7) were helpful. Let me know your suggestions and feedback in the comment section.


