Fredrich Maney wrote:
> On Tue, Oct 20, 2009 at 8:55 AM, Tundra Slosek <ivoryring at gmail.com> wrote:
>> I have not yet. Am I misreading the documentation, or is a quorum server a 
>> single point of failure for the cluster (i.e. the cluster's availability 
>> can be no higher than the quorum server's)? As it stands, I'm a little 
>> surprised, because my "4-node HA cluster" seems to be less tolerant of 
>> failure than a pair of 2-node clusters would be: with one of the four 
>> nodes out of the cluster, a reboot of one of the remaining 3 causes the 
>> other 2 to panic and reboot. I understand this is intentional, to avoid a 
>> partition, but it really feels like a 4-node cluster is no more available 
>> than a 3-node cluster. If I'm going to add a 5th machine for no purpose 
>> other than to be the quorum server, would I be better off making it a 
>> 5th node?
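The panic behaviour described above follows from majority-vote quorum arithmetic. Here is a minimal sketch, assuming the standard rule that each node contributes one vote and the cluster needs a strict majority of all configured votes to stay up; the helper names are invented for illustration:

```python
# Hedged sketch of majority-quorum arithmetic (one vote per node assumed;
# real clusters can add quorum devices/servers that contribute extra votes).

def quorum_needed(total_votes):
    """Strict majority: more than half of all configured votes."""
    return total_votes // 2 + 1

def has_quorum(present_votes, total_votes):
    return present_votes >= quorum_needed(total_votes)

total = 4                            # four nodes, one vote each
print(quorum_needed(total))          # 3 votes required
print(has_quorum(3, total))          # one node down: True, cluster survives
print(has_quorum(2, total))          # a second node reboots: False ->
                                     # the remaining nodes lose quorum and panic
```

Under this rule a 4-node cluster tolerates only one node failure before a second outage drops it below quorum, which is why it feels no more available than a 3-node cluster; an extra vote from a quorum server shifts the arithmetic.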
> 
> The QS is a very lightweight process (about on par with NTP) that can
> run on pretty much any server outside of the cluster nodes with no
> impact. It can also serve as the QS for multiple clusters. And if you
> are truly worried about it being a SPoF, you could run more than one
> of them on different machines - or even an HA-clustered QS, though
> that would probably be taking things a bit far.
> 
>> My current experiment, which I'm working on setting up, is to have the 
>> private interconnect run over physical NICs that are not shared for any 
>> other purpose. If that doesn't work, I'll try a quorum server. Either way, 
>> I'll keep this thread updated as I go.
> 
> Unless you are using VLAN tagging, you are required to use dedicated
> NICs for the Interconnect.
> 

That's true for SC 3.2, but OHAC 2009.06 allows you to use VNICs over 
shared physical NICs for the private interconnect.

Thanks,
Nick
