Hi,

I have an LVM logical volume that I replicate to another server with DRBD.
The /dev/drbd0 device itself contains a PV/VG/LVs, and this mostly works.
I have colocation and order constraints that bring up a VIP, promote DRBD, and
start LVM plus the file systems.
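
For reference, the constraints look roughly like the following crm-shell sketch
(the resource names res-vip, ms-drbd0 and grp-ars-lvm-fs here are placeholders;
only grp-ars-lvm-fs matches my actual config):

```
colocation col-fs-with-drbd inf: grp-ars-lvm-fs ms-drbd0:Master
colocation col-vip-with-fs inf: res-vip grp-ars-lvm-fs
order ord-drbd-then-fs inf: ms-drbd0:promote grp-ars-lvm-fs:start
order ord-fs-then-vip inf: grp-ars-lvm-fs:start res-vip:start
```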

The problem arises when I take the active node offline.
At that point the VIP and the DRBD master move, but the PV/VG are not
scanned/activated, the file systems are not mounted,
and "crm status" reports an error for the ocf:heartbeat:LVM resource:

"Volume group [replicated] does not exist or contains an error!
Using volume group(s) on command line."

At this point the /dev/drbd0 physical volume is not known to the server, and
the fix requires:

root# pvscan --cache /dev/drbd0
root# crm resource cleanup grp-ars-lvm-fs

Is there an ocf:heartbeat:LVM parameter, or an /etc/lvm/lvm.conf setting, that
forces the PV/VGs to come online?
It is not clear whether the RA's "exclusive" or "tag" parameters are needed,
or whether there is a corresponding lvm.conf setting.
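
For example, is something along these lines the right direction? (A sketch
only; the resource name res-lvm and the monitor timings are assumptions, and
"replicated" is my VG name. volgrpname and exclusive are the actual
ocf:heartbeat:LVM parameter names.)

```
primitive res-lvm ocf:heartbeat:LVM \
    params volgrpname="replicated" exclusive="true" \
    op monitor interval="30s" timeout="30s"
```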

Also, is the lvm.conf "write_cache_state = 0" setting recommended by the DRBD
User's Guide correct here?
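
My reading of the guide is roughly the fragment below (the filter pattern is
an assumption for my setup, where only /dev/drbd0 should be scanned for PVs;
the guide also suggests deleting the stale /etc/lvm/cache/.cache file after
changing these):

```
# /etc/lvm/lvm.conf (fragment)
filter = [ "a|^/dev/drbd.*|", "r|.*|" ]  # scan DRBD devices only
write_cache_state = 0                    # don't persist a stale PV cache
```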

Thanks,
Darren


_______________________________________________
Users mailing list: [email protected]
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
