Hello,

I am posting here on the recommendation of a reply on the ocfs2-users list to my
original post:
http://oss.oracle.com/pipermail/ocfs2-users/2011-April/005046.html

Excerpt:
  
I am running a two-node web cluster on OCFS2 via DRBD Primary/Primary
(v8.3.8) and Pacemaker. Everything seems to be working great, except during
testing of hard-boot scenarios.
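
To give an idea of the setup, a dual-primary DRBD resource of this kind is
typically defined along the lines below (resource name, hostnames, backing
disks and addresses are placeholders, not my exact configuration):

    resource r0 {
      protocol C;
      startup {
        become-primary-on both;              # promote both nodes at startup
      }
      net {
        allow-two-primaries;                 # required for dual-primary / OCFS2
        after-sb-0pri discard-zero-changes;  # split-brain recovery policies
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      on web1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on web2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }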

Whenever I hard-boot one of the nodes, the other node is successfully fenced
and marked "Outdated":

* <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown"
  ds1="UpToDate" ds2="Outdated" />

However, this locks up I/O on the still-active node and prevents any
operations within the cluster :( I have even forced DRBD into StandAlone
mode while in this state, but that does not release the I/O lock either.
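
Forcing StandAlone was done along these lines (again, "r0" stands in for the
actual resource name):

    drbdadm disconnect r0    # drop the replication link; cs becomes StandAlone
    drbdadm cstate r0        # verify the resource now reports StandAlone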

...does anyone know if this is possible using OCFS2 (i.e. maintaining an
active cluster in Primary/Primary when the other node has a failure, be it
forced, controlled, etc.)? Is "qdisk" a requirement for this to work with
Pacemaker?

NOTE: In a reply to my original post (URL above) I also provided an example
CIB that I have been using during testing.
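
For context, a dual-primary DRBD master/slave definition under Pacemaker
typically looks roughly like the crm shell fragment below (resource names and
values are illustrative only; my actual CIB is in the reply linked above):

    primitive p_drbd_web ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="20s" role="Master" \
        op monitor interval="30s" role="Slave"
    ms ms_drbd_web p_drbd_web \
        meta master-max="2" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"
    property stonith-enabled="true" no-quorum-policy="ignore"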

