no_quorum_policy="ignore" achieves that, as you said yourself. What it
doesn't achieve is safety for shared data. Hence STONITH.

If you want to run without quorum -> no_quorum_policy (this is really a
split-brain prevention setting...)
If you want security for shared data -> STONITH.

Pick and choose :)
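In CIB terms, the two knobs could be combined roughly as below. This is a minimal sketch for the Heartbeat 2.0.x CIB schema; the nvpair ids are arbitrary examples, and later Pacemaker releases spell these properties with dashes (no-quorum-policy, stonith-enabled):

```xml
<!-- Illustrative crm_config fragment: keep resources running when
     quorum is lost, but require STONITH so a failed node is fenced
     before its resources are taken over. Ids are placeholders. -->
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <attributes>
      <nvpair id="opt-no-quorum" name="no_quorum_policy" value="ignore"/>
      <nvpair id="opt-stonith"   name="stonith_enabled"  value="true"/>
    </attributes>
  </cluster_property_set>
</crm_config>
```

On a running cluster the same properties can usually be set with crm_attribute, e.g. something like `crm_attribute -t crm_config -n no_quorum_policy -v ignore` (again assuming the 2.0.x attribute spelling).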

Yan

Sander van Vugt wrote:
> You are absolutely right about the role of STONITH in all this. But this
> doesn't change the fact that my services are stopped if the number of nodes
> that remains is below the designed quorum. I would really like to see my
> services keep running even if just one node remains.
> 
> Sander
> 
> 
>> -----Original message-----
>> From: [EMAIL PROTECTED] [mailto:linux-ha-
>> [EMAIL PROTECTED] On behalf of Yan Fitterer
>> Sent: Wednesday, June 27, 2007 14:43
>> To: General Linux-HA mailing list
>> Subject: Re: [Linux-HA] Quorum behavior
>>
>> STONITH.
>>
>> Second node fails: the 3rd node takes over its resources, but only after
>> a verified power-off (or restart) of the 2nd node.
>>
>> Actually - same thing for 1st node.
>>
>> Challenge: ensure that you don't lose the network AND STONITH at the same
>> time.
>>
>> Yan
>>
>> Sander van Vugt wrote:
>>> Hi list,
>>>
>>>
>>>
>>> Trying to think out a decent solution here, I ran across the following. I
>>> want to build a three-node cluster where some services are running. Now the
>>> following situation arises: I bring down one node for maintenance. Shortly
>>> after that, a second node fails. This causes the cluster to lose quorum.
>>> The result? Just to be sure, the default no_quorum_policy "stop" causes the
>>> cluster to stop all resources completely! Second attempt: I switch the
>>> no_quorum_policy to "ignore". The resource keeps on running, so I am happy.
>>> However, in that situation, if the node that fails just has a network
>>> failure and can therefore no longer see the rest of the cluster, a perfect
>>> split brain arises, in which the failing node as well as the remaining node
>>> both start offering the services. Rather uncool if these involve shared
>>> file systems that are not cluster safe :-). Third scenario: I set
>>> no_quorum_policy to "freeze". At least the services continue running on the
>>> remaining node, but services that were somewhere else don't fail over
>>> automatically, so: still no high availability.
>>>
>>>
>>>
>>> So my question: I want my services to continue running, even if the number
>>> of nodes that remains is less than the quorum. Is there any way this can be
>>> organized?
>>>
>>>
>>>
>>> (Version used: 2.08 on SLES 10 sp1)
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Sander
>>>
>>>
>>>
>>> _______________________________________________
>>> Linux-HA mailing list
>>> [email protected]
>>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>>> See also: http://linux-ha.org/ReportingProblems
> 