Corey Kovacs wrote:
I am building a new 5-node cluster and have been going back and
forth on how I want to set up my clustered volumes. I began with a
single VG where all the volumes were created, which worked exactly as
expected. After talking to some other people, it turns out they have
experienced problems with VGs getting corrupted and so on. It's never
happened to me but hey, it's possible. So I thought of splitting the
volumes onto dedicated VGs to divide the risk, so to speak, but that
doesn't seem very "clean".

My question is this. What are considered "Best Practices" regarding
LVM2 and its use on clustered or non-clustered systems?

I've only seen this happen when a system is allowed to access volumes owned by a cluster of which it is not a functioning member. Once all the hosts have CLVM up and talking to each other, you should be fine. The greatest danger is when you're adding a new host to an already-running production cluster, because you have less freedom to test and confirm that CLVM is keeping everything in sync.
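A quick sanity check before a host activates anything clustered; this is a minimal sketch assuming the RHEL 5 cman/clvmd stack, so adjust for whatever cluster stack you actually run:

  # cman_tool nodes          <- every expected host should be listed as a member
  # service clvmd status     <- the clustered LVM daemon must be running
  # vgs -o vg_name,vg_attr   <- clustered VGs show a 'c' in the attr string

If clvmd isn't up and talking, LVM commands will error out on or skip the clustered VGs, which is your cue to stop before the host touches shared storage.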

For clustered systems, I would observe the following guidelines:

1) If the set of hosts that can access volume A is different from the set of hosts that can access volume B, they should be in different volume groups.

2) If the set of hosts that can access volume A is identical to the set of hosts that can access volume B, and this is not expected to change, they should be in the same volume group (there's a rough sketch of the commands after this list). Otherwise, you're just creating more potential failure points (RAID 0 style) if something does go wrong. Yes, it's less of a pain to restore one volume from backup than two, but your production cluster will probably go down if only one of them has a problem.

3) Don't let a new host touch the shared storage until you've confirmed it can communicate with the cluster and bring CLVM up properly. If you're using storage fencing, this should be trivial.

4) Set aside a little bit of shared storage for testing, so you can make sure CLVM is syncing everything up properly when adding a new host to a running production cluster, without endangering the data you care about.
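To make 2) and 4) concrete, a rough sketch. All the names here (clusterA_shared, clusterA_test, the mpath devices) are made up and the sizes are arbitrary:

  # vgcreate -cy clusterA_shared /dev/mapper/mpath0   <- one clustered VG for the real data
  # vgcreate -cy clusterA_test /dev/mapper/mpath1     <- small clustered scratch VG

Then, when bringing a new host into the running cluster, exercise the scratch VG from both sides first:

  node1#   lvcreate -L 64M -n synctest clusterA_test
  newhost# lvs clusterA_test    <- 'synctest' should appear; an error here
                                   means clvmd isn't talking to the cluster
  node1#   lvremove -f clusterA_test/synctest

If the new host can see and manipulate the scratch VG cleanly, you have decent evidence CLVM is syncing before you let it anywhere near the volumes you care about.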

For all systems, clustered and not, I try to make sure that volume group names are unique, using the hostname as a template for unshared volumes and the cluster name as a template for shared volumes. Bad things can happen when there are two different "vg00" groups on the same SAN that are accidentally left visible to other hosts.
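For example (the host and cluster names here are obviously made up):

  # vgcreate web01_vg00 /dev/sda2                     <- local, unshared VG on host web01
  # vgcreate -cy clusterA_vg00 /dev/mapper/mpath0     <- shared VG for cluster clusterA

And if you already have two "vg00"s visible on the same SAN, the name alone is ambiguous, so rename one by its UUID (vgrename accepts a UUID in place of the old name):

  # vgs -o vg_name,vg_uuid      <- find the UUID of the vg00 you want to rename
  # vgrename <UUID> web01_vg00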

-- Chris
