On Thu, Jul 14, 2011 at 9:02 AM, RNZ <[email protected]> wrote:

>
>
> On Thu, Jul 14, 2011 at 6:42 PM, Serge Dubrouski <[email protected]> wrote:
>
>>
>>
>> On Thu, Jul 14, 2011 at 5:50 AM, RNZ <[email protected]> wrote:
>>
>>>
>>>
>>> On Thu, Jul 14, 2011 at 3:28 PM, Florian Haas
>>> <[email protected]> wrote:
>>>
>>>> On 2011-07-14 12:55, RNZ wrote:
>>>> >
>>>> >
>>>> > On Thu, Jul 14, 2011 at 2:02 PM, Florian Haas
>>>> > <[email protected]> wrote:
>>>> >
>>>> >     On 2011-07-14 08:46, RNZ wrote:
>>>> >     > No, what I want and need is a multi-master scheme (more than two
>>>> nodes)...
>>>> >
>>>> >     There is nothing in Pacemaker's master/slave scheme that restricts
>>>> you
>>>> >     to a single master. The ocf:linbit:drbd resource agent, for
>>>> example, is
>>>> >     configurable in dual-Master mode.
>>>> >
>>>> >     Once the resource agent properly implements the functionality (the
>>>> hard
>>>> >     part), configuring a multi-master master/slave set is simply a
>>>> question
>>>> >     of setting the master-max meta parameter to a value greater than 1
>>>> (the
>>>> >     easy part).
>>>> >
>>>> > I don't think so... CouchDB's RESTful API makes it very easy to run
>>>> > replication in the following scheme:
>>>>
>>>> It's entirely possible that the CouchDB native API is more powerful
>>>> in specific respects, but if you want to put it into a Pacemaker cluster
>>>> you may have to occasionally accept some minor limitations. That's a
>>>> tradeoff which is present for all Pacemaker-managed applications.
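
For reference, a minimal crm sketch of the dual-master setup Florian
describes for ocf:linbit:drbd; the resource names here are placeholders:

primitive drbd-res ocf:linbit:drbd \
         params drbd_resource="r0"
ms ms-drbd drbd-res \
         meta master-max="2" master-node-max="1" \
         clone-max="2" clone-node-max="1" notify="true"

With master-max="2", Pacemaker promotes two clone instances to Master at
once, which is all the "easy part" amounts to once the agent supports it.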
>>>
>>> I understand that. But I don't understand why Pacemaker doesn't allow
>>> changing a resource's location based on another resource's state. It
>>> seems like self-evident functionality, doesn't it?
>>>
>>
>> It does: you can use colocation or groups. If you can somehow distinguish
>> one instance of your CouchDB from the others (Andrew suggested a master
>> role, for example), you can tie the vIP to that instance with a colocation
>> constraint. Alternatively, if you can uniquely identify all of your
>> instances, then again you can colocate your vIP with one of them.
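
For illustration, assuming the CouchDB agent were taught a master role, the
colocation constraint could look like this in crm syntax (all names here
are hypothetical):

# keep the vIP on the node where the CouchDB master is promoted
colocation vIP-with-master inf: vIP ms-couchdb:Master

# or, with uniquely identified instances, a group keeps the vIP and one
# instance together and starts them in order
group couchdb-grp couchdb-1 vIP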
>>
> Colocation needs a clone, and there is a potential split-brain (couchdb-1
> on more than one node)...
>

Ahh! I didn't notice that you never want your CouchDB resources to fail
over to the other nodes at all. Quite a non-standard configuration. Your
approach should probably work, though.
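For what it's worth, pinning an instance to a single node so that it never
fails over is doable with a -inf location rule; a minimal sketch, reusing
the node and resource names from your example below:

location couchdb-1-on-vub001 couchdb-1 \
         rule -inf: #uname ne vub001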

> Groups - would those be used for the case where couchdb-1 fails/stops?
>
> I think it would be more natural to use something like the following
> location rules:
>
> node vub001
> node vub002
> primitive couchdb-1 ocf:heartbeat:couchdb \
> ...
> location vIP-L1 vIP \
>          rule 100: #uname eq vub001 \
>          rule inf: #ra-state couchdb-1 eq 0
> location vIP-L2 vIP \
>          rule 10: #uname eq vub002 \
>          rule inf: #ra-state couchdb-2 eq 0
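
There is no #ra-state attribute in Pacemaker today, but something close is
achievable if the agent itself publishes a node attribute, e.g. via
attrd_updater; the attribute name couchdb-1-state below is hypothetical:

# in the agent, on successful start (and removed again on stop):
#   attrd_updater -n couchdb-1-state -v started

location vIP-L1 vIP \
         rule 100: #uname eq vub001 \
         rule -inf: not_defined couchdb-1-state \
         rule -inf: couchdb-1-state ne started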


-- 
Serge Dubrouski.
_______________________________________________________
Linux-HA-Dev: [email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/
