On Oct 23, 2010, at 2:23 PM, Robinson, Eric wrote:

>> Looks like you are mixing up physical connections 
>> and Corosync rings. 
> 
> I should not have mentioned DRBD at all as it confuses the question. 
> 
> Let me try it this way:
> 
> How do I build a three-node Corosync cluster with redundant heartbeat
> paths? I don't trust the switched network or the Ethernet bonding
> drivers to be 100% reliable, and it is just good practice to have
> multiple heartbeat paths. On my old 2-node clusters, I have three
> heartbeat paths: the switched network, back-to-back links, and serial
> cables. 

At some point you would have to trust Ethernet. I think SLIP is dead :)
When you have two independent switches on two different UPSes, you already have 
redundancy, right?
If you are not comfortable with just one extra path, by all means add 
interfaces, add bonding, 
put 4 Ethernet cards in each server with 4 different switches - the sky is the 
limit :)
But I would say that if you have a "backbone" switch and a "drbd/iscsi/SAN" switch and 
still use heartbeat, that's a good enough solution for the common case.
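For what it's worth, Corosync's redundant ring protocol (RRP) lets you declare a 
second ring directly in corosync.conf - each ring bound to a different network. 
A minimal sketch (the addresses and ports below are placeholders, adjust for 
your networks):

```
totem {
    version: 2
    # "passive" alternates between rings; "active" uses both at once
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0    # e.g. the backbone switch network
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.0.0       # e.g. a second, independent network
        mcastaddr: 226.94.1.2
        mcastport: 5407
    }
}
```

Note that both rings must be IP networks reachable by all three nodes, which is 
exactly why a point-to-point crossover cable between only two of them can't 
serve as a ring.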


> 
> It sounds like you are saying that to have multiple heartbeat paths on a
> 3-node Corosync cluster, each heartbeat path must be through a separate
> switched network or VLAN. I can see why this would be the case.
> 
> I was hoping that a crossover cable could be used to form a "logical"
> ring between two nodes, and that I could configure two logical rings
> between 3 servers. 

Token ring? Also dead :)

> 
> So really, maybe I'm not trying to build a 3-node cluster. What I'm
> really trying to build are two 2-node clusters where one of the physical
> servers participates in BOTH 2-node clusters. CLUSTER1 would consist of
> physical servers A and C. CLUSTER2 would consist of physical servers B
> and C.
> 
> So maybe what I want to know is, is it possible to run multiple
> "instances" of Corosync on server C, such that it participates in two
> separate clusters?

You can run as many corosyncs as your memory permits, but I'm afraid each of them 
would have to run in its own DomU/VMware/whatever :)
And maybe that even has some practical application, who knows.

> 
> Thanks for your patience. I had no idea this would end up being so
> complicated. "3-node cluster" is much easier to say than to configure,
> apparently. :-)

It really isn't :)

Vadym
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
