On 02/14/2011 08:29 PM, yvette hirth wrote:
Mike Lovell wrote:

On 02/14/2011 12:59 PM, Dennis Jacobfeuerborn wrote:
I'm particularly worried about the networking side being a bottleneck for the setup. I was looking into 10gbit and infiniband equipment but they drive the cost up quite a bit and I'm not sure if they are necessary if I can bond several 1gbit interfaces.

there are several more to read through. the basics i got from glancing at threads like these in the past is that bonding does okay for 2 interfaces but there aren't huge gains when going to 4 interfaces. also, considering that a 4 port ethernet card is going to cost about $200 on the cheap end and can go much higher, using an older infiniband card or a 10gig-e card can make sense. there has been talk on the list of infiniband cards that can do 10gbps for under $200 but i haven't actually done it myself. maybe someone else can chime in on that. i've also seen 10gig-e gear for under $500.
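for reference, a minimal sketch of what bonding two gig-e ports looks like with iproute2. the interface names (eth1/eth2) and the 192.168.10.0/24 subnet are just placeholders, and your distro's network scripts will usually wrap this differently:

```shell
# load the bonding driver and create bond0 in balance-rr mode
# (balance-rr is one of the few modes that can push a single tcp
# stream past one link's speed, though packet reordering often
# keeps real throughput well below 2x -- part of why gains taper off)
modprobe bonding
ip link add bond0 type bond mode balance-rr

# slaves must be down before they can be enslaved
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0

# bring the bond up on a private replication subnet
ip addr add 192.168.10.1/24 dev bond0
ip link set bond0 up
```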

the big cost is the switches. 10g switches, even small ones, start at around $3k and go up from there.

for a while mellanox had a ddr ib "kit", with four or so cards, sfp's, cables, and the switch, for around $6-7k. while that's still a big budget bite to swallow, $7k to network 4 boxes at 20gbps is a fab deal.

for just a 2 server setup, switches aren't needed. at least not for 10gig-e. the 10gig-e nics i tried could just be connected with a cable between them and they auto-detected the crossover. i would guess infiniband is similar. setups larger than 2 servers will require switching and get more expensive. i was assuming that the context of the question was just about the link for drbd between the servers and not the connectivity to the rest of the network.
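to illustrate the switchless case, a back-to-back 10gig-e link for the drbd replication path could be set up like this. the port name (eth2) and the 10.0.0.0/30 subnet are placeholder assumptions, not anything from the thread:

```shell
# on the first node: give the direct-attached 10gig-e port a
# private address. no switch involved, just a cable to the peer.
ip addr add 10.0.0.1/30 dev eth2
ip link set eth2 up

# on the second node, mirror it:
#   ip addr add 10.0.0.2/30 dev eth2
#   ip link set eth2 up

# the drbd resource config would then point its 'address' lines
# at 10.0.0.1 and 10.0.0.2 so replication stays on this link,
# separate from the boxes' regular network connectivity.
```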

mike
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
