You will have to be careful with the Intel gear. It works fine for most 
deployments, but with my setup I had problems. I deployed first with 520SRs 
(on two Dell R710s) and found that they would enter a mode where they latch up 
or lock up: they would still respond to pings but stop sending any data, which 
caused all sorts of issues. I had to switch to Broadcom instead, which was a 
real pain since we had already bought four Intel cards and then had to buy four 
Broadcom ones. Most of my clients use Broadcom and have been running fine at 
10G for the past four years. In my setup, DRBD supports two volumes that can 
each be read/written at 640 MB/s via network bonding, and that is with 
round-trip commit (protocol C).
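
For reference, a minimal DRBD 8.4-style sketch of a two-volume, protocol C 
resource replicating over a bonded link (hostnames, devices, and addresses 
below are placeholders, not my actual config):

    resource r0 {
        net {
            protocol C;              # synchronous, round-trip commit
        }
        volume 0 {
            device    /dev/drbd0;
            disk      /dev/sdb1;     # placeholder backing device
            meta-disk internal;
        }
        volume 1 {
            device    /dev/drbd1;
            disk      /dev/sdc1;     # placeholder backing device
            meta-disk internal;
        }
        on node-a {
            address   10.0.0.1:7789; # IP on the bonded replication link (bond0)
        }
        on node-b {
            address   10.0.0.2:7789;
        }
    }

Two separate single-volume resources would work just as well here; the point 
is just protocol C and the replication addresses sitting on the bonded link.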

James



-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Shaun Thomas
Sent: Friday, January 11, 2013 9:53 AM
To: Andy Dills
Cc: [email protected]
Subject: Re: [DRBD-user] Poor fsync performance

On 01/11/2013 08:10 AM, Andy Dills wrote:

> Any positive recommendations on 10Gb NICs?

Intel is something of a gold standard in that area. We use them exclusively. 
Stay far, far away from anything with a Broadcom chip.

A major benefit is that the 10Gb cards usually have so much bandwidth that you 
can drop all of your other 1Gb interfaces and reduce cabling slightly. We *were* 
using dual bonded onboard 1Gb for regular traffic and 10Gb for DRBD only over a 
crossover, but we found we lost nothing by just doing everything over the 10Gb link.
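
To illustrate (with made-up hostnames and addresses), the only DRBD-side change 
for that kind of consolidation is where the replication addresses point; 
everything else in the resource stays the same:

    # Before: replication on a dedicated crossover subnet
    # on db-a { address 192.168.100.1:7788; }
    # on db-b { address 192.168.100.2:7788; }

    # After: replication on the shared 10GbE addresses
    on db-a {
        address 10.1.1.10:7788;
    }
    on db-b {
        address 10.1.1.11:7788;
    }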

Good luck!

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
[email protected]

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user