Hi Kenneth,
taking the network/TCP part...

Yes, you need socketMaxListenConnections (1024) if you have that many nodes.
Keep in mind that, in addition, your operating system may need to be adjusted as well.

e.g. the default listen backlog differs depending on your OS:
[root@ems1 patch]# sysctl  net.core.somaxconn  
net.core.somaxconn = 1024
saphana1:/hana/shared # sysctl  net.core.somaxconn
net.core.somaxconn = 128
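
If the OS value is lower than what you configure in Scale, you can raise it; a minimal sketch (the sysctl.d file name is just an illustration):

# raise the listen backlog on the running system
sysctl -w net.core.somaxconn=1024
# make it persistent across reboots (illustrative file name)
echo 'net.core.somaxconn = 1024' > /etc/sysctl.d/99-somaxconn.conf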

From:        Kenneth Waegeman <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        09/06/2017 01:55 PM
Subject:        Re: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?
Sent by:        [email protected]




Hi Sven,

I see two parameters that we have set to non-default values that are not in your list of options that still need to be configured:

verbsRdmasPerConnection (256) and
socketMaxListenConnections (1024)

I remember we had to set socketMaxListenConnections because our cluster consists of 550+ nodes.
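
For reference, we applied them with plain mmchconfig, something like this (from memory):

mmchconfig verbsRdmasPerConnection=256
mmchconfig socketMaxListenConnections=1024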

Are these settings still needed, or is this also tackled in the code?

Thank you!!

Cheers,
Kenneth


On 02/09/17 00:42, Sven Oehme wrote:
Hi Ed,

Yes, the default for that has changed for customers who had not overridden it. The reason is that many systems in the field, including all ESS systems that come pre-tuned, were manually changed from the 16k default to 8k because of better performance that was confirmed in multiple customer engagements and tests with various settings. We therefore changed the default to what it should be in the field, so people are not bothered to set it anymore (simplification) and get the benefit of the better-performing default out of the box.
All this happened as part of the communication code overhaul, which led to significant (think factors of) improvement in RPC performance for RDMA and VERBS workloads.
There is another round of significant enhancements coming soon that will make even more parameters obsolete or change more of the defaults for better out-of-the-box performance.
I see that we should probably communicate these changes better. Not that I think this will have any negative effect compared to your performance with the old setting; I am actually pretty confident that you will get better performance with the new code. But setting parameters back to default on most 'manually tuned' systems will probably make them even faster.
If you have a Scale client on 4.2.3+, you really shouldn't have anything set besides maxFilesToCache, pagepool, workerThreads, and potentially prefetch settings. If you are a protocol node, you need these plus settings specific to an export (e.g. SMB and NFS set some special settings); pretty much everything else these days should be left at default so the code can pick the correct parameters. If it's not, and you get better performance by manually tweaking something, I'd like to hear about it.
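As a sketch of what going back to defaults looks like: mmchconfig accepts DEFAULT as a value to drop an override (the attribute picked here is just an example; check mmlsconfig first for what you actually have set):

mmlsconfig verbsRdmaMinBytes
mmchconfig verbsRdmaMinBytes=DEFAULT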
On the communication side, the next release will eliminate another set of parameters that are now auto-set, and we plan to work on NSD next.
I presented slides about the communication and simplification changes in various forums; the latest public non-NDA slides I presented are here --> http://files.gpfsug.org/presentations/2017/Manchester/08_Research_Topics.pdf

Hope this helps.

Sven



On Fri, Sep 1, 2017 at 1:56 PM Edward Wahl <[email protected]> wrote:
Howdy.   Just noticed this change to min RDMA packet size and I don't seem to
see it in any patch notes.  Maybe I just skipped the one where this changed?

 mmlsconfig verbsRdmaMinBytes
verbsRdmaMinBytes 16384

(in case someone thinks we changed it)

[root@proj-nsd01 ~]# mmlsconfig |grep verbs
verbsRdma enable
verbsRdma disable
verbsRdmasPerConnection 14
verbsRdmasPerNode 1024
verbsPorts mlx5_3/1
verbsPorts mlx4_0
verbsPorts mlx5_0
verbsPorts mlx5_0 mlx5_1
verbsPorts mlx4_1/1
verbsPorts mlx4_1/2


Oddly I also see this in config, though I've seen these kinds of things before.
mmdiag --config |grep verbsRdmaMinBytes
   verbsRdmaMinBytes 8192
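
If I remember correctly, mmdiag --config marks values that were explicitly set with a leading '!', so the absence of one above should mean the daemon is using its own built-in default rather than anything we configured:

mmdiag --config | grep '!'    # list only explicitly set attributes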

We're on a recent efix.
Current GPFS build: "4.2.2.3 efix21 (1028007)".

--

Ed Wahl
Ohio Supercomputer Center

614-292-9302


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
