Hi all,

In the past I read somewhere that bonding more than two NICs incurs a severe 
speed penalty, because packets arrive out of order and TCP has to re-order them.
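For context, this is the kind of setup I mean: a round-robin (balance-rr) bond, 
which stripes frames across all slaves and is the mode usually blamed for the 
re-ordering penalty. A minimal sketch (interface names and the address are just 
examples, not my actual config):

```shell
# balance-rr (mode 0) sends successive frames over successive slaves,
# so a single TCP flow is spread across all NICs -- and may arrive
# out of order at the receiver.
ip link add bond0 type bond mode balance-rr miimon 100

# Slaves must be down before they can be enslaved.
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set eth2 down; ip link set eth2 master bond0

ip link set bond0 up
ip addr add 192.168.10.1/24 dev bond0   # example address
```

As I understand it, modes like 802.3ad keep each flow on one slave (so no 
re-ordering, but also no single-flow speedup), which is why I'm asking 
specifically about balance-rr with 3-4 cards.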

I'm currently building a two-node DRBD cluster that uses InfiniBand for the DRBD 
replication link. The cluster exports SCST iSCSI targets, and I'd like to offer 
the best possible speed to the iSCSI clients (which are on gigabit NICs). 
Has anyone tried bonding 3 or 4 cards? What throughput do you get out of this?


Thanks!


BC 
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
