Hi Kyle,

Thanks for your input.

These cards will be used for testing an IDS setup, so I guess the answers would be:
- No FCoE
- No RDMA
- About latency vs. throughput: I'd guess it all comes down to the host
server's ability to process the packets efficiently without losing many
(see the counter-sampling sketch below). I wouldn't know what to tell
the network people there... I guess the QLogic and Emulex cards would
fit this 'everyday use', but I have no idea if they'll need to deal
with multicast. Any further suggestions?
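
For reference, here's roughly how I was planning to measure that
host-side loss: sample the NIC's receive/drop counters before and after
a test run. Just a sketch, assuming a Linux host that exposes the usual
/sys/class/net/<iface>/statistics/ counters; the interface name "eth2"
is a placeholder.

#!/usr/bin/env python
# Sketch: sample a NIC's receive/drop counters around a test window to
# estimate host-side loss. Assumes Linux sysfs; "eth2" is a placeholder.
import time

IFACE = "eth2"  # placeholder: substitute the 10Gb NIC under test

def read_stat(name):
    # Each counter is a plain integer file under sysfs
    f = open("/sys/class/net/%s/statistics/%s" % (IFACE, name))
    try:
        return int(f.read())
    finally:
        f.close()

before = (read_stat("rx_packets"), read_stat("rx_dropped"))
time.sleep(60)  # replay traffic / run the IDS load during this window
after = (read_stat("rx_packets"), read_stat("rx_dropped"))

pkts = after[0] - before[0]
drops = after[1] - before[1]
print("packets: %d  drops: %d  (%.4f%% dropped)"
      % (pkts, drops, 100.0 * drops / max(pkts + drops, 1)))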
Thanks,

Vincent

On Tue, 19 Oct 2010, Kyle O'Donnell wrote:

We've been doing a LOT of 10Gb NIC testing, and what I can tell you is
that it really depends on what you are doing.

Do you need FCoE?
Do you need RDMA?

Do you need low latency?

Do you need high bandwidth and lowest possible latency?

Do you need the lowest possible latency, but don't care as much about
bandwidth?

Or do you not care much about either?

The QLogic and Emulex 10Gb NICs work just fine, but they are geared
towards things like FCoE, and although they work for everyday use we've
had problems with things like multicast.
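
If you want to sanity-check multicast receive on a given card, a quick
receiver like the one below is enough to see whether joined traffic
actually shows up. Rough sketch only; the group, port, and interface
addresses are placeholders you'd swap for your own:

#!/usr/bin/env python
# Sketch: join a multicast group on a specific NIC and count datagrams.
# All addresses below are placeholders, not real test values.
import socket

GROUP = "239.1.1.1"      # placeholder multicast group
PORT = 5000              # placeholder UDP port
IFACE_ADDR = "10.0.0.2"  # placeholder: IP bound to the NIC under test

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes group address + local interface address, so
# the join lands on the card being tested, not the default route.
mreq = socket.inet_aton(GROUP) + socket.inet_aton(IFACE_ADDR)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

count = 0
while count < 10000:
    sock.recv(65535)
    count += 1
print("received %d multicast datagrams" % count)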

We've had serious problems with the Intel NetEffect cards, as the first
generation were prone to heating up and melting, and the second
generation caused kernel panics.  We are now testing their third
generation and have not had any issues *yet*.

We have had good luck with the Chelsio cards.

We have had great lab results with the Mellanox cards.

We have had good luck with the Solarflare cards as well (low-latency
cards, not really great for high bandwidth, and they require a
preloader for the lowest latency... lots of caveats to consider).
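
To clarify the "preloader" bit: the lowest-latency mode generally means
interposing the vendor's user-space stack ahead of the normal kernel
sockets path via LD_PRELOAD. Purely illustrative sketch; the library
path and command are placeholders, not any vendor's actual names:

#!/usr/bin/env python
# Sketch of the preloader pattern: run a program with a user-space
# network stack interposed via LD_PRELOAD. Paths are placeholders.
import os, subprocess

env = dict(os.environ)
env["LD_PRELOAD"] = "/opt/vendor/lib/libpreload.so"  # placeholder path
subprocess.call(["./latency_test"], env=env)         # placeholder command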


In every single case we have had to use the most recent drivers for
these cards, as the ones that come (if they come) with Red Hat are
either buggy or do not perform as well as the newer drivers.
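
A quick way to see which driver and version an interface is actually
running, so you can compare it against the vendor's latest release.
Sketch only; it assumes ethtool is installed, and "eth2" is again a
placeholder:

#!/usr/bin/env python
# Sketch: print the in-use driver name/version for an interface via
# "ethtool -i". "eth2" is a placeholder interface name.
import subprocess

IFACE = "eth2"
out = subprocess.Popen(["ethtool", "-i", IFACE],
                       stdout=subprocess.PIPE).communicate()[0]
for line in out.splitlines():
    key = line.split(":", 1)[0]
    if key in ("driver", "version", "firmware-version"):
        print(line)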


