We tune VM-related sysctl values on our GPFS clients.
These are the values we use for HPC nodes with 256 GB+ of memory:
vm.min_free_kbytes = 2097152
vm.dirty_bytes = 3435973836
vm.dirty_background_bytes = 1717986918
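
For reference, those work out to roughly 2 GiB kept free (2097152 KiB),
a ~3.2 GiB hard ceiling on dirty pages, and a ~1.6 GiB background
writeback threshold. A minimal sketch of making them persistent,
assuming a drop-in file under /etc/sysctl.d/ (the filename is
arbitrary):

    # /etc/sysctl.d/90-gpfs-client.conf (hypothetical filename)
    vm.min_free_kbytes = 2097152            # 2 GiB kept free for the kernel
    vm.dirty_bytes = 3435973836             # ~3.2 GiB: hard limit on dirty page cache
    vm.dirty_background_bytes = 1717986918  # ~1.6 GiB: start background writeback here

    # apply without a reboot
    sysctl -p /etc/sysctl.d/90-gpfs-client.conf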

The vm.dirty_* parameters are there to keep the kernel from buffering
huge amounts of NFS writes and then pushing them over the network all at
once, flooding out GPFS traffic. (Note that setting vm.dirty_bytes
overrides vm.dirty_ratio, and vm.dirty_background_bytes likewise
overrides vm.dirty_background_ratio; only one of each pair is in effect
at a time.)

I'd also recommend checking the client-side GPFS parameters pagepool
and/or pagepoolMaxPhysMemPct to make sure you have a reasonable,
well-understood limit on how much memory mmfsd will use; a quick way to
inspect them is sketched below.
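
A minimal sketch using the standard Spectrum Scale admin commands (the
4G value in the commented-out change is purely illustrative, not a
recommendation):

    mmlsconfig pagepool                  # configured page pool size
    mmlsconfig pagepoolMaxPhysMemPct     # cap as a percentage of physical memory
    mmdiag --config | grep -i pagepool   # values mmfsd is actually running with
    # example change, applied immediately to one node:
    # mmchconfig pagepool=4G -N <nodename> -i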

Best,
Chris

On 12/1/20, 1:32 PM, "[email protected] on behalf of
Renata Maria Dart" <[email protected]> wrote:

    Hi, some of our GPFS clients will get stale file handles for GPFS
    mounts, and it seems to be related to memory depletion.  Even after
    the memory is freed, though, GPFS will continue to be unavailable
    and df will hang.  I have read about setting vm.min_free_kbytes as
    a possible fix for this, but wasn't sure whether it was meant for a
    GPFS server or whether a GPFS client would also benefit, and what
    value should be set.

    Thanks for any insights,

    Renata



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
