On 09/19/2013 11:54 AM, David Lang wrote:
> On Thu, 19 Sep 2013, Fabio M. Di Nitto wrote:
> 
>> On 09/19/2013 11:26 AM, David Lang wrote:
>>> On Wed, 18 Sep 2013, Digimer wrote:
>>>
>>>> On 18/09/13 23:16, Fabio M. Di Nitto wrote:
>>>>> On 09/19/2013 12:47 AM, David Lang wrote:
>>>>>> I'm trying to get a cluster up and running without using multicast,
>>>>>> I've
>>>>>> made my cluster.conf be the following, but the systems are still
>>>>>> sending
>>>>>> to the multicast address and not seeing each other as a result.
>>>>>>
>>>>>> Is there something that I did wrong in creating the cman segment
>>>>>> of the file? Unfortunately the cluster.conf man page just refers
>>>>>> to the corosync.conf man page, but the two files use different
>>>>>> config styles.
>>>>>>
>>>>>> If not, what else do I need to do to disable multicast and just use
>>>>>> udpu?
>>>>>
>>>>>>  <cman two_node="1" expected_votes="1">
>>>>>>    <totem vsftype="none" token="5000"
>>>>>>           token_retransmits_before_loss_const="10" join="60"
>>>>>>           consensus="4800" rrp_mode="none" transport="udpu">
>>>>>>      <interface ringnumber="0" bindnetaddr="10.1.18.0"
>>>>>>                 mcastport="5405" ttl="1">
>>>>>>        <member memberaddr="10.1.18.177" />
>>>>>>        <member memberaddr="10.1.18.178" />
>>>>>>      </interface>
>>>>>>    </totem>
>>>>>>  </cman>
>>>>>
>>>>> You don't need all of that...
>>>>>
>>>>> <cman two_node="1" expected_votes="1" transport="udpu"/>
>>>>>
>>>>> There is no need to specify anything else; member addresses and the
>>>>> rest will be determined by the node names.
>>>>>
>>>>> Fabio
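
Spelled out, a minimal two-node cluster.conf along those lines would look
roughly like the sketch below (the cluster and node names are only
placeholders; use the hostnames that resolve to your 10.1.18.x addresses):

  <?xml version="1.0"?>
  <cluster name="mycluster" config_version="1">
    <cman two_node="1" expected_votes="1" transport="udpu"/>
    <clusternodes>
      <clusternode name="node1.example.com" nodeid="1"/>
      <clusternode name="node2.example.com" nodeid="2"/>
    </clusternodes>
  </cluster>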
>>>>
>>>> To add to what Fabio said;
>>>>
>>>> You've not set up fencing. This is not supported and, when you use
>>>> rgmanager, clvmd and/or gfs2, the first time a fence is called your
>>>> cluster will block.
>>>>
>>>> When a node stops responding, the other node will call fenced to eject
>>>> it from the cluster. One of the first things fenced does is inform dlm,
>>>> which stops giving out locks until fenced reports that the node is
>>>> gone. If the node can't be fenced (for example because no fence devices
>>>> are configured), that call will never succeed, so dlm will never start
>>>> offering locks again. This leaves rgmanager, cman and gfs2 locked up
>>>> (by design).
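
(For reference, per-node fencing in cluster.conf is declared roughly as in
the sketch below; the agent, address and credentials are placeholders for
whatever fence hardware you actually have:

  <clusternode name="node1.example.com" nodeid="1">
    <fence>
      <method name="ipmi">
        <device name="ipmi_node1"/>
      </method>
    </fence>
  </clusternode>

  <fencedevices>
    <fencedevice name="ipmi_node1" agent="fence_ipmilan"
                 ipaddr="10.1.18.190" login="admin" passwd="secret"/>
  </fencedevices>
)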
>>>
>>> In my case the nodes have no shared storage. I'm using
>>> pacemaker/corosync to move an IP from one box to another (and in one
>>> case, I'm moving a dummy resource between two alerting boxes, where
>>> both boxes see all logs and calculate alerts, but I want only the
>>> active box to send out the alert).
>>>
>>> In all these cases, split-brain situations are annoying, but not
>>> critical
>>>
>>> If both alerting boxes send an alert, I get identical alerts.
>>>
>>> If both boxes have the same IP, it's not great, but since either one
>>> will respond, the impact is limited to TCP connections being broken
>>> each time the ARP race winner changes for a given source box or
>>> gateway (and since most cases involve UDP traffic, there is no impact
>>> at all in those cases).
>>>
>>> This is about as simple a use case as you can get :-)
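
(On the pacemaker side that really is just one resource; something along
the lines of the command below, where the resource name, address and
netmask are placeholders:

  pcs resource create vip ocf:heartbeat:IPaddr2 ip=10.1.18.200 \
      cidr_netmask=24 op monitor interval=30s
)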
>>
>> If you are running only a pool of VIPs, with no fencing, then you want
>> to consider making your life simpler with keepalived instead of
>> pcmk+corosync.
> 
> Thanks, I'll look into it. For all these two-machine clusters, what I
> really want to use is heartbeat with v1-style configs; they were really
> trivial to deal with (I've had that on 100+ clusters, some going back to
> heartbeat 0.4 days :-)
> 
> But since that's no longer an option, I figured it was time to bite the
> bullet and move to pacemaker, and since RHEL is pushing
> pacemaker/corosync, that's what we set up.

Well, based on what you tell me, there is little need for a "real"
cluster; you'd be better off deploying something even simpler, such as
keepalived.

Here corosync/pcmk seems like "too much" to deploy just to move a few IPs
around.
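
For comparison, the entire keepalived side for one VIP is roughly the
snippet below (interface, priority and address are placeholders; the
second box runs the same block with state BACKUP and a lower priority):

  vrrp_instance VI_1 {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 100
      advert_int 1
      virtual_ipaddress {
          10.1.18.200/24
      }
  }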

But then again, feel free to test and try whatever you like best :)

Fabio

_______________________________________________
Openais mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/openais
