On 06/08/2011 12:50 PM, Klaus Darilion wrote:
> I'm not sure if this question is better on corosync or pacemaker mailing
> list - please advise if I'm wrong.
>
>
> I have the following setup:
>
>             Switch
>            /      \
>           /        \
>      eth0/          \eth0
>         /            \
>   Server1<------->  Server2
>              eth1
>
> eth1 is used for DRBD replication.
>
> Now I want to use Pacemaker+Corosync to manage DRBD and the database
> which uses the DRBD block device as data partition. The database is
> accessed via an IP address in the eth0 network.
>
> I need to avoid a split-brain where DRBD becomes master on both servers
> and the database is started on both. I experimented with corosync on eth0,
> eth1 or both (see other mail from today) but didn't find a proper solution.
>
> I think I have to add other constraints to avoid split-brain, e.g.
> pinging the default gateway. But pinging has a delay until the ping
> primitive in pacemaker detects a failure.
>
> I think adding a 3rd node would also help as then I could use a quorum
> to avoid split-brain.
>
> My questions: Where do I handle/avoid split-brain - on corosync layer or
> pacemaker layer?
>
> Is there a best practice for handling such scenarios?
>
> Shall I use corosync over eth0, eth1 or both (rrp)?
>
> If I use a 3rd node just for quorum - is a plain "corosync" node
> sufficient or do I also need pacemaker with constraints to never run
> the DRBD+database service on node3?
>
> Thanks
> Klaus

If you are concerned about split-brain in DRBD, you can put the 
protection into the DRBD config file. Look at:

====
disk {
         # This tells DRBD to block I/O (resource) and then try to fence
         # the other node (stonith). The 'stonith' option requires that
         # we set a fence handler below. The name 'stonith' comes from
         # "Shoot The Other Node In The Head" and is a term used in
         # other clustering environments. It is synonymous with
         # 'fence'.
         fencing         resource-and-stonith;
}

# We set 'stonith' above, so here we tell DRBD how to actually fence
# the other node.
handlers {
         # The term 'outdate-peer' comes from other scripts that flag
         # the other node's resource backing device as 'Outdated'.
         # In our case though, we're flat-out fencing the other node,
         # which has the same effective result.
         outdate-peer    "/sbin/obliterate-peer.sh";
}
====

The "obliterate-peer.sh" script is built to tie into RHCS's 'fence_node' 
tool, but I am sure a pacemaker version exists or could easily be 
written/adapted.

-- 
Digimer
E-Mail:              [email protected]
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"I feel confined, only free to expand myself within boundaries."
_______________________________________________
Openais mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/openais
