STONITH.

Second node fails: the 3rd node takes over its resources, but only after
a verified power-off (or restart) of the 2nd node.

Actually - the same applies to the 1st node.

Challenge: ensure that you don't lose the network AND STONITH at the same time.
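Roughly, that combination looks like this in the v2 CRM. This is only a sketch: the exact attribute spellings differ between releases ("no_quorum_policy" matches the spelling used in this thread; newer versions use "no-quorum-policy"), so check crm_attribute(8) on your own nodes.

```shell
# Keep resources running when quorum is lost
# (spelling of the option name may differ on your release):
crm_attribute --type crm_config --name no_quorum_policy --update ignore

# ...but ONLY together with fencing enabled, so a node that merely lost
# its network link gets powered off before its resources are taken over:
crm_attribute --type crm_config --name stonith_enabled --update true
```

With "ignore" but no working STONITH device, the split-brain scenario described below is exactly what you get.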

Yan

Sander van Vugt wrote:
> Hi list,
> 
>  
> 
> Trying to think out a decent solution here, I ran across the following. I
> want to build a three-node cluster where some services are running. Now the
> following situation arises: I bring down one node for maintenance. Shortly
> after that, a second node fails. This causes the cluster to lose quorum. The
> result? Just to be sure, the default "no_quorum_policy stop" causes the
> cluster to stop all resources completely! Second attempt: I switch the
> no_quorum_policy to "ignore". The resource keeps on running, so I am happy.
> However, in that situation, if the node that failed just has a network
> failure and can therefore no longer see the rest of the cluster, a perfect
> split brain arises in which the failing node as well as the remaining node
> both start offering the services. Rather uncool if these involve shared file
> systems that are not cluster safe :-). Third scenario: I set no_quorum_policy
> to "freeze". At least the services continue running on the
> remaining node, but services that were running somewhere else don't fail over
> automatically, so: still no high availability.
> 
>  
> 
> So my question: I want my services to continue running, even if the number of
> remaining nodes is less than quorum. Is there any way this can be
> organized? 
> 
>  
> 
> (Version used: 2.08 on SLES 10 sp1)
> 
>  
> 
> Thanks,
> 
> Sander
> 
>  
> 
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems