On Thu, Sep 13, 2012 at 12:30:10AM -0400, Jake Carroll wrote:
Hi all.

I saw a question on this the other day, and thought I'd ask my own similar (but 
not the same) question.

We have 10GbE interconnects for all our cluster work within SGE/OGE. Storage is 
served over NFS to all nodes through 10GbE top-of-rack (ToR) switches.

We want to squeeze everything we can out of these switches and hardware.

In some contrast to the other post...


So:

1.  Should we be using jumbo-frame style MTU settings?

Yes.  It helps with bulk data transfers, including large NFS operations.
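
As a minimal sketch (assuming the 10GbE interface is eth0 and the NFS
server sits at a hypothetical 10.1.1.10):

    # Raise the MTU on the node's 10GbE interface.
    ip link set dev eth0 mtu 9000

    # Verify jumbo frames survive end-to-end with a non-fragmenting
    # ping: 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
    ping -M do -s 8972 -c 3 10.1.1.10

Every device in the layer-2 path (switch ports, NFS server, all the
nodes) has to agree on the larger MTU, or you'll see strange stalls
rather than clean failures.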

2.  If so – what MTU do users generally recommend? 9000? Above 9000?

9k is standard.  There are, as noted elsewhere, some cards that can only
do an 8k MTU (especially some common HP NICs).
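
If you're unsure what a given card will accept, the driver rejects MTUs
it can't do, so a quick probe works.  A sketch, again assuming eth0
(this briefly changes the interface MTU, so run it on a quiet node):

    # Walk down from the largest candidate until the driver says yes.
    for mtu in 9600 9000 8192 8000 1500; do
        if ip link set dev eth0 mtu $mtu 2>/dev/null; then
            echo "largest accepted MTU: $mtu"
            break
        fi
    done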

3.  What of hardware flow control? Generally enable it, or are there 
precautions/corner cases where it's not a sensible thing to do?

Enabling hardware flow control on 10G switches is as close to a "magic
un-suck" option as I've seen.  I suggest turning it on.

4.  Are there any other simple network-changes/suggestions people have that we 
could stand to benefit from?

For your edge ports (i.e. the ones connecting compute nodes), disable
STP; the nodes should get network connectivity a bit faster when a link
comes up.
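
If you want to measure what STP is costing you, flap a port and time
how long until traffic actually forwards.  A rough sketch, with a
hypothetical interface (eth0) and gateway (10.1.1.1); run it from the
console, since it drops the link:

    # With STP listening/learning, the first reply can take ~30s;
    # on an edge/portfast port it should be nearly immediate.
    ip link set dev eth0 down; sleep 1; ip link set dev eth0 up
    time ( until ping -c 1 -W 1 10.1.1.1 >/dev/null 2>&1; do sleep 0.5; done )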

For reference – we're using the current Rocks distribution (6.0, Mamba) and 
Dell PowerConnect 10GbE ToR switches/interconnects, coupled with Broadcom CNAs 
in all our blade/node infrastructure.

Rocks specifically has documentation about disabling STP ("portfast", I
think) for Dell switches.


Thanks!

--JC


--
Jesse Becker
NHGRI Linux support (Digicon Contractor)
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
