On 13.06.2012 at 17:45, Arnold Krille wrote:

On Wednesday 13 June 2012 09:26:45 Felix Frank wrote:
On 06/12/2012 08:23 PM, Dennis Jacobfeuerborn wrote:
Don't use crossover cables. In my experience, crossover cables for a two-node
cluster only cause problems... use a simple switch.

Why would a setup with 2 cables and a switch be more reliable than just a
single cable? That doesn't make sense.

Uhm, I don't think that's what Eduardo was suggesting.

Someone on this list (Digimer?) made a good point some time ago about
switches allowing for better forensics in case of link problems (i.e.,
the switch can help you identify the side with a faulty NIC/cable).

On the other hand, a switch introduces one more (two really, counting
the extra required cable) possible point of replication failure. I've
never had negative experiences with back-to-back connections either.

A switch also has input and output buffers, introducing another step of
latency.

We would use a direct link cable if that scaled to more than 2 servers. (We actually thought about just adding more network cards, connecting three servers directly with three cables, and using bridges with (R)STP for the storage ring. But now we will just use the additional cards for more redundancy, giving us
trunked connections to two switches...)
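A setup like that, with two NICs bonded across two separate switches, might look like the following Debian-style /etc/network/interfaces fragment. This is only a sketch; the interface names, addresses, and the choice of active-backup mode are assumptions (active-backup is the safe default when the two switches are not stacked or MLAG-capable):

```
# /etc/network/interfaces fragment (hypothetical names and addresses)
auto bond0
iface bond0 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    bond-slaves eth1 eth2      # one leg to each switch
    bond-mode active-backup    # safe when the switches are independent
    bond-miimon 100            # link-state check interval in ms
    bond-primary eth1
```

With independent switches, link-aggregation modes like 802.3ad (LACP) generally won't work across both at once, which is why active-backup is the usual choice here.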


Two switches can also be a problem:

We have bonding with 2 interfaces connected to 2 switches, each switch connected to a different uplink in HSRP mode. Some weeks ago we experienced high packet loss, with the result that the delivered web services were more or less unusable. After replacing the switches, the problems were gone.

For the direct cluster connection we have 2 bonded Gbit interfaces, but one connection only runs at 100 Mbit. Maybe a switch would help with diagnosis in this case. But I will just put 2 additional interfaces into the nodes.
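Even without a switch to consult, the negotiated speed of each leg can be checked on the hosts directly. A sketch (the interface and bond names are assumptions; adjust to your setup):

```
# Check what each bond slave actually negotiated
ethtool eth1 | grep -E 'Speed|Duplex|Link detected'
ethtool eth2 | grep -E 'Speed|Duplex|Link detected'

# The bonding driver's own view of its slaves
cat /proc/net/bonding/bond0
```

If a Gbit NIC reports 100Mb/s, swapping the cable is a reasonable first step: gigabit needs all four wire pairs, and a damaged pair commonly makes autonegotiation fall back to 100 Mbit.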

Helmut Wollmersdorfer

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
