You'll probably be better off looking at an InfiniBand-based solution. I just picked up two dual-port Mellanox 10G PCIe x8 cards and a 1m cable for under US$200.
Aside from raw bandwidth, the other advantage is roughly a quarter of the latency of a gigabit ethernet setup, although my suspicion is that you may lose some of that advantage once your packets traverse the IPoIB layers. As I'm still waiting on my eBay purchases to arrive, I may come to regret this advice once I actually try to set it up and configure it. *Caveat emptor.*

Mike

On Thu, May 13, 2010 at 11:06 AM, Bart Coninckx <[email protected]> wrote:

> On Thursday 13 May 2010 17:01:46 Bart Coninckx wrote:
> > Hi,
> >
> > would it make sense to use four bonded gigabit NICs when the DRBD storage
> > is, let's say, a RAID 5 on 10K RPM SAS drives? Or would 2 cards be
> > sufficient? I don't have the drives and the controller yet, so I have no
> > idea on what their performance is.
> >
> > I'd like to order all hardware at once, but it is hard to get a grip on
> > possible performance values.
> >
> > Anyone any "feel" for this?
> >
> > Thx!
> >
> > Bart
> > _______________________________________________
> > drbd-user mailing list
> > [email protected]
> > http://lists.linbit.com/mailman/listinfo/drbd-user
>
> mmm, googling "DRBD bond four" pointed me to:
>
> http://www.mail-archive.com/[email protected]/msg00975.html
>
> where Florian explains 4 is a bad idea. Two it is then.
>
> Cheers,
>
> B.

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing
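For what it's worth, since the thread settles on two bonded gigabit NICs for the replication link, a minimal sketch of such a bond with iproute2 follows. The interface names (eth1, eth2), the bond name, and the 10.0.0.0/24 subnet are illustrative assumptions, not anything from the thread; adjust for your distribution's network configuration.

```shell
# Sketch: bond two dedicated gigabit ports into one link for DRBD traffic.
# balance-rr (round-robin) is a common choice for a back-to-back crossover
# link between exactly two hosts; miimon 100 enables link monitoring.
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip addr add 10.0.0.1/24 dev bond0   # use 10.0.0.2/24 on the peer node
ip link set bond0 up
```

You would then point the DRBD resource's address at 10.0.0.1/10.0.0.2 so replication stays off the public network.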
