In addition: depending on your block size and the multi-threaded NFS server, I/Os may not arrive at GPFS in order, so GPFS cannot correctly recognize sequential or random access patterns.
In that case, adjust nfsPrefetchStrategy from its default of 0 to a value between 1 and 10.
It tells GPFS to treat all in-flight I/Os as sequential within that number of block boundaries.
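
For illustration only (the value 2 and the cesNodes node class are placeholders for your CES servers, not a recommendation):

    mmchconfig nfsPrefetchStrategy=2 -N cesNodes -i   # -i requests the change take effect immediately and persist
    mmlsconfig nfsPrefetchStrategy                    # verify the current value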

Furthermore, consider tuning prefetchPct and pagepool.
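
Purely as a sketch (sizes are examples only and must be matched to the memory of your CES nodes):

    mmchconfig pagepool=16G -N cesNodes -i    # -i applies immediately; the pagepool can be grown online
    mmchconfig prefetchPct=40 -N cesNodes     # default is 20 (% of pagepool used for prefetch/write-behind);
                                              # depending on the release, a GPFS restart on those nodes may be needed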

Regarding maxFilesToCache (MFTC): check your current utilization with "mmfsadm saferdump fs" to see whether you are hitting the limit and how many files are open.
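
For example (the exact field names in the dump output differ between releases, so treat the grep pattern as a starting point):

    mmfsadm saferdump fs | grep -i -E 'filesToCache|fileCache'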
Mit freundlichen Grüßen / Kind regards

 
Olaf Weiser

EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform,
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940




From:        Bryan Banister <bbanis...@jumptrading.com>
To:        gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:        10/17/2016 09:00 AM
Subject:        Re: [gpfsug-discuss] CES and NFS Tuning suggestions
Sent by:        gpfsug-discuss-boun...@spectrumscale.org




One major issue is the maxFilesToCache and maybe the maxStatCache (though I hear that Linux negates the use of this parameter now?  I don’t quite remember).  Ganesha apparently likes to hold open a large number of files and this means that it will quickly fill up the maxFilesToCache.  When this happens the [gpfsSwapdKproc] process will start to eat up CPU time.  This is the daemon that tries to find a file to evict from the cache when a new file is opened.  This overhead will also hurt performance.  
 
IBM in a PMR we opened suggested setting this to something like 5 Million for the protocol nodes.  I think we started with 1.5 Million.  You have to be mindful of memory requirements on the token servers to handle the total sum of all maxFilesToCache settings from all nodes that mount the file system.
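 
As an illustration only (the numbers are the ones mentioned above and the cesNodes class stands in for your protocol nodes; this is not a sizing recommendation):

    mmchconfig maxFilesToCache=1500000 -N cesNodes
    mmchconfig maxStatCache=10000 -N cesNodes     # optional; see the caveat above about its usefulness on Linux
    # maxFilesToCache changes only take effect after GPFS is restarted on the affected nodes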
 
Of course, the standard NFS tuning parameters (number of server threads, NFS client mount options) should still be adjusted as well.
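 
On the client side, mount options along these lines are a typical starting point (hostname, export path, and sizes are placeholders to adjust for your workload and network):

    mount -t nfs -o vers=3,rsize=1048576,wsize=1048576,hard,timeo=600 ces-vip.example.com:/gpfs/fs0/export /mnt/export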
 
Hope that helps,
-Bryan
 
From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Oesterlin, Robert
Sent: Sunday, October 16, 2016 7:06 PM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: [gpfsug-discuss] CES and NFS Tuning suggestions

 
Looking for some pointers or suggestions on what I should look at changing in Linux and/or GPFS "mmchconfig" settings to help boost NFS performance. Out of the box it seems "poor".
 
 
Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
507-269-0413

 




_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

