On Thu, 19 Sep 2013, Fabio M. Di Nitto wrote:

On 09/19/2013 11:35 AM, David Lang wrote:
On Thu, 19 Sep 2013, Fabio M. Di Nitto wrote:

You don't need all of that...

<cman two_node="1" expected_votes="1" transport="udpu"/>

There is no need to specify anything else. The member addresses and the
rest will be determined by the node names.

Ok, that solves my problem in most cases (8 of the 10 clusters I'm
configuring right now).

In the other two clusters, I will actually have 4 boxes per cluster, and
I want a resource to run on only one of the four. I don't care which
one, and split brain is not a major problem (no shared storage).

Is it enough to just specify 4 nodes, leaving two_node="1" in place?

two_node and expected_votes=1 are specific to clusters composed of two nodes.

Do I just remove the two_node attribute, or does this get really ugly?

You also need to remove expected_votes.
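
For the four-node case, a bare cluster.conf would be something along
these lines (the cluster and node names here are just placeholders):

<cluster name="logcluster" config_version="1">
  <cman transport="udpu"/>
  <clusternodes>
    <clusternode name="logbox1" nodeid="1"/>
    <clusternode name="logbox2" nodeid="2"/>
    <clusternode name="logbox3" nodeid="3"/>
    <clusternode name="logbox4" nodeid="4"/>
  </clusternodes>
</cluster>

With no two_node and no expected_votes set, each node contributes one
vote by default, so the cluster expects 4 votes and needs 3 of them for
quorum.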


An example of one of these two clusters:

Log alerting engines. All boxes in the cluster will receive copies of
all logs, and process them in parallel. I want to have only one of the
four boxes be the 'active' box that sends out alerts (my alert scripts
can test for the presence of a resource on the local box).

That's totally up to the application you are writing and how you
configure the IPs.

In this case, I'm not using pacemaker to manage the IPs; those are static on all 4 boxes. All I'm having it do is manage a dummy resource that the alerting scripts test for.
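
Roughly speaking, the test the scripts do amounts to something like
this (the resource name "alert-master" and the send_alert command are
placeholders for whatever I actually end up using):

# only the box that currently holds the dummy resource sends alerts
if crm_resource --resource alert-master --locate 2>/dev/null \
     | grep -q "$(uname -n)"; then
    send_alert "$@"    # placeholder for the real notification command
fi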


The four boxes are actually two pairs, one pair in each of two datacenters.

This will get ugly if connectivity between the datacenters goes kaboom,
as you will end up with two 2-node clusters, neither of which can operate.

I figured I'd need to disable quorum, or set expected_votes=2, or something like that (rough sketch below).

There is not going to be a quorum because neither half would have enough
systems to form one. In the common split-brain case (two clusters of two
boxes because of a datacenter crosslink outage), alerts will be generated
from both halves. That is a very tolerable 'worst case', and may even be
the right thing: for the duration of the split, each half may be seeing
logs that the other doesn't, and since the alerting path may follow
different network connectivity, alerts may get through even if ring
packets don't.
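
If I do keep this on pacemaker, I assume the 'disable the quorum' part
just means telling it to keep running resources without quorum,
something like this (untested):

# crmsh syntax; with pcs it would be "pcs property set no-quorum-policy=ignore"
crm configure property no-quorum-policy=ignore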

IMHO you are using the wrong setup here. I'd use keepalived here as
well, since it works across multiple datacenters and doesn't need quorum
or fencing. It bases IP management on other criteria.
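
A bare-bones keepalived sketch of the idea (the interface, router id,
VIP and script paths below are all made up, adjust them to your network;
between datacenters you will likely want unicast peering rather than
the default multicast):

vrrp_instance ALERT_MASTER {
    state BACKUP
    interface eth0                  # placeholder interface
    virtual_router_id 51            # placeholder VRID
    priority 100                    # use a different priority on each box
    advert_int 1
    # hooks to flip alerting on/off when this box becomes/stops being master
    notify_master "/usr/local/bin/alerting-on"
    notify_backup "/usr/local/bin/alerting-off"
    notify_fault  "/usr/local/bin/alerting-off"
    virtual_ipaddress {
        192.0.2.10/32               # placeholder VIP
    }
}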

Ok, I'll look into it.

David Lang
_______________________________________________
Openais mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/openais
