How do you configure High Availability LVM (HA LVM) on CentOS or RHEL 7 Linux without GFS2? How can you configure a cluster resource to activate and monitor shared LVM volumes? Uncontrolled simultaneous access to shared storage can lead to data corruption, so storage access must be managed like any other active/passive service: it must be active on only one machine at a time. This article shows how to configure LVM volumes for active/passive use in a High Availability cluster.

To configure HA LVM you need, as the name suggests, logical volumes. Make sure the same volume group and logical volume names are visible on all nodes of the cluster.
Earlier I shared an article about Cluster Architecture and Types of Clusters, as well as a step-by-step guide to configure an HA Cluster with three nodes. Later I removed one of the cluster nodes to demonstrate a two-node cluster setup and its configuration.
I will continue to use the same setup for this article. As you can see below, I already have some resource groups in my cluster. I have written another article to help you understand resource groups and resource constraints in a cluster.
[root@node1 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node1.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Wed Dec 26 18:37:51 2018
Last change: Wed Dec 26 18:37:29 2018 by root via cibadmin on node1.example.com
2 nodes configured
4 resources configured
Online: [ node1.example.com node2.example.com ]
Full list of resources:
Resource Group: apache-group
apache-ip (ocf::heartbeat:IPaddr2): Started node2.example.com
apache-service (ocf::heartbeat:apache): Started node2.example.com
Resource Group: ftp-group
ftp-ip (ocf::heartbeat:IPaddr2): Stopped
ftp-service (systemd:vsftpd): Stopped
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Setting up the environment
My cluster runs on RHEL 7 installed on Oracle VirtualBox; you can also install Oracle VirtualBox on your own RHEL/CentOS Linux host.
Before starting with the HA LVM cluster configuration, let us create our volume group and logical volume so they are available on both cluster nodes. Both nodes are connected to the same additional shared storage, which appears as /dev/sdc on node1 and as /dev/sdb on node2.
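Because the shared disk can show up under different device names on each node, it is worth confirming that both names point at the same physical disk, for example by comparing the serial reported by `lsblk -dno SERIAL`. A minimal sketch, using hypothetical serial values in place of a live query:

```shell
# Hypothetical serial values; on a real node capture them with e.g.:
#   lsblk -dno SERIAL /dev/sdc   (on node1)
#   lsblk -dno SERIAL /dev/sdb   (on node2)
node1_serial="VB1234abcd-5678efgh"
node2_serial="VB1234abcd-5678efgh"

# The device names differ, but the serials must match for shared storage
if [ "$node1_serial" = "$node2_serial" ]; then
    echo "both nodes see the same shared disk"
else
    echo "WARNING: device names point at different disks" >&2
fi
```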
Create physical volume
[root@node1 ~]# pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created.
Create volume group on /dev/sdc
[root@node1 ~]# vgcreate vgcluster /dev/sdc
Volume group "vgcluster" successfully created
Lastly, create a logical volume. Here we create a 400MB logical volume named lvcluster on the vgcluster volume group.
[root@node1 ~]# lvcreate -L 400M -n lvcluster vgcluster
Logical volume "lvcluster" created.
For our demo I will format the lvcluster logical volume with the XFS file system.
[root@node1 ~]# mkfs.xfs /dev/vgcluster/lvcluster
meta-data=/dev/vgcluster/lvcluster isize=512 agcount=4, agsize=25600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=102400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=855, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Validate the changes
[root@node1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- <15.78g
swap centos -wi-ao---- 760.00m
lvcluster vgcluster -wi-a----- 400.00m
[root@node1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 2 2 0 wz--n- <17.52g 1020.00m
vgcluster 1 1 0 wz--n- <8.00g <7.61g
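In the lvs output, the Attr column encodes the state of each logical volume; the fifth character is the activation flag, and `a` means the volume is active. A small sketch that checks this flag, using the lv_attr value shown above for lvcluster (on a live node you could fetch it with `lvs --noheadings -o lv_attr vgcluster/lvcluster`):

```shell
# lv_attr for lvcluster as reported by `lvs` above
attr="-wi-a-----"

# The 5th character of lv_attr is the activation state flag ('a' = active)
state=$(echo "$attr" | cut -c5)
if [ "$state" = "a" ]; then
    echo "lvcluster is active"
fi
```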
Ensure that locking_type is set to 1 and that lvm2-lvmetad is disabled by running the following command:
[root@node1 ~]# lvmconf --enable-halvm --services --startstopservices
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by:
lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.socket.
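You can double-check the settings that lvmconf applied by grepping /etc/lvm/lvm.conf. The sketch below runs that check against a sample snippet so it is self-contained; on a cluster node, point the grep at the real /etc/lvm/lvm.conf instead:

```shell
# Sample of the two relevant lines from /etc/lvm/lvm.conf after
# `lvmconf --enable-halvm`; on a real node grep the actual file.
cat > /tmp/lvm.conf.sample <<'EOF'
    locking_type = 1
    use_lvmetad = 0
EOF

# Both settings must be present for HA LVM to work safely
grep -E 'locking_type|use_lvmetad' /tmp/lvm.conf.sample
```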
Create a cluster resource for LVM
Now we can create the HA LVM resource. LVM is the resource agent we are going to use, and halvm is just the name we assign to the resource. volgrpname is set to vgcluster, exclusive is set to true so that the volume group is activated on only one node at a time, and the resource is placed in a group named halvmfs.
[root@node1 ~]# pcs resource create halvm LVM volgrpname=vgcluster exclusive=true --group halvmfs
Assumed agent name 'ocf:heartbeat:LVM' (deduced from 'LVM')
Now that we have created the resource for the cluster, we can verify that it has indeed been started.
[root@node1 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node1.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Wed Dec 26 18:34:28 2018
Last change: Wed Dec 26 18:34:14 2018 by root via cibadmin on node1.example.com
2 nodes configured
5 resources configured
Online: [ node1.example.com node2.example.com ]
Full list of resources:
Resource Group: apache-group
apache-ip (ocf::heartbeat:IPaddr2): Started node2.example.com
apache-service (ocf::heartbeat:apache): Started node2.example.com
Resource Group: ftp-group
ftp-ip (ocf::heartbeat:IPaddr2): Stopped
ftp-service (systemd:vsftpd): Stopped
Resource Group: halvmfs
halvm (ocf::heartbeat:LVM): Started node2.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
As you can see, our HA LVM resource has started successfully on node2.example.com. Next we need to take care of the file system, which has to be mounted somewhere.
Create a resource to mount the file system
For this article I will create a /xfs directory as my mount point on both cluster nodes.
[root@node1 ~]# mkdir /xfs
[root@node2 ~]# mkdir /xfs
Next we need to create a resource that mounts the file system through the cluster.
[root@node1 ~]# pcs resource create xfsfs Filesystem device="/dev/vgcluster/lvcluster" directory="/xfs" fstype="xfs" --group halvmfs
Here we create a resource of the Filesystem type for our logical volume /dev/vgcluster/lvcluster, which should be mounted on /xfs, and make it part of our existing halvmfs group. Because resources in a group start in the order they are listed, halvm activates the volume group before xfsfs mounts it.
The command executed successfully, so let us validate the pcs cluster status.
[root@node1 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node1.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Wed Dec 26 18:37:51 2018
Last change: Wed Dec 26 18:37:29 2018 by root via cibadmin on node1.example.com
2 nodes configured
6 resources configured
Online: [ node1.example.com node2.example.com ]
Full list of resources:
Resource Group: apache-group
apache-ip (ocf::heartbeat:IPaddr2): Started node2.example.com
apache-service (ocf::heartbeat:apache): Started node2.example.com
Resource Group: ftp-group
ftp-ip (ocf::heartbeat:IPaddr2): Stopped
ftp-service (systemd:vsftpd): Stopped
Resource Group: halvmfs
halvm (ocf::heartbeat:LVM): Started node2.example.com
xfsfs (ocf::heartbeat:Filesystem): Started node2.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
As you can see, our resource xfsfs has been successfully started on node2.example.com.
Validate HA LVM configuration
Now that we know our HA cluster resources are running on node2.example.com, validate that the logical volume is mounted on /xfs:
[root@node2 ~]# mount | grep xfs
/dev/mapper/vgcluster-lvcluster on /xfs type xfs (rw,relatime,attr2,inode64,noquota)
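If you want to script this validation, a small helper (hypothetical, exercised here against the mount line captured above) can verify both the mount point and the file system type. On a live node you would feed it the output of `mount | grep vgcluster-lvcluster`:

```shell
# check_mounted MOUNT_LINE EXPECTED_MOUNTPOINT EXPECTED_FSTYPE
# In `mount` output, field 3 is the mount point and field 5 is the fstype.
check_mounted() {
    echo "$1" | awk -v mp="$2" -v fs="$3" '$3 == mp && $5 == fs {print "mounted OK"}'
}

check_mounted "/dev/mapper/vgcluster-lvcluster on /xfs type xfs (rw,relatime)" /xfs xfs
```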
So all looks good and everything is working as expected.
Validate HA LVM failover
Now let us validate that failover works for our HA LVM cluster resources. For this purpose we will put node2.example.com into standby:
[root@node2 ~]# pcs cluster standby node2.example.com
Now validate the pcs cluster status
[root@node2 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node1.example.com (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Wed Dec 26 18:39:56 2018
Last change: Wed Dec 26 18:39:32 2018 by root via cibadmin on node2.example.com
2 nodes configured
6 resources configured
Node node2.example.com: standby
Online: [ node1.example.com ]
Full list of resources:
Resource Group: apache-group
apache-ip (ocf::heartbeat:IPaddr2): Started node1.example.com
apache-service (ocf::heartbeat:apache): Started node1.example.com
Resource Group: ftp-group
ftp-ip (ocf::heartbeat:IPaddr2): Stopped
ftp-service (systemd:vsftpd): Stopped
Resource Group: halvmfs
halvm (ocf::heartbeat:LVM): Started node1.example.com
xfsfs (ocf::heartbeat:Filesystem): Started node1.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
As we can see, our HA LVM cluster resources are now started on node1.example.com since node2 is in standby.
Next, check whether the logical volume lvcluster is mounted on node1:
[root@node1 ~]# mount | grep xfs
/dev/mapper/vgcluster-lvcluster on /xfs type xfs (rw,relatime,attr2,inode64,noquota)
So our failover is also working. Once the failover test is complete, you can bring node2 back into the cluster with pcs cluster unstandby node2.example.com.
Lastly, I hope the steps in this article to configure HA LVM on a cluster in Linux were helpful. Let me know your suggestions and feedback in the comment section.


