fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840)
This is not a statement of actual memory usage; it is rather a statement of how much metadata may fit in the remaining structures outside the pagepool. This value does not change at all when you modify your mFtC setting.
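As a quick sanity check of that line (a sketch, assuming the dump's "k" means KiB; the entry count and byte sizes are taken verbatim from your output below):

    # 11718554 cached entries * (512-byte inode + 2840 bytes of other
    # per-entry metadata), converted to KiB
    echo $(( 11718554 * (512 + 2840) / 1024 ))
    38359954

which lands within a couple of KiB of the reported 38359956 k: roughly 36.6 GiB that could be addressed, not 36.6 GiB actually allocated.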
There is a really good presentation by Tomer Perry from the User Group meetings on the memory footprint of GPFS under various conditions.
In your case, you may very well be hitting the CES node memory leak you just pointed out.
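If you want to confirm the exact build before assuming that APAR applies, mmdiag prints the running build string:

    # shows the GPFS build level the daemon is currently running
    mmdiag --version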
Sorry for my hasty reply earlier ;)
Achim
From: Stijn De Weirdt <[email protected]>
To: [email protected]
Date: 09/05/2019 16:48
Subject: Re: [gpfsug-discuss] advanced filecache math
Sent by: [email protected]
seems like we are suffering from
http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737
as these are ces nodes, we suspected something wrong with the caches, but
it looks like a memleak instead.
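a minimal sketch of the check that pointed us away from cache sizing, using plain ps plus mmlsconfig (the only assumption is that the daemon process is named mmfsd, as usual):

    # resident/virtual size of the GPFS daemon, in KiB
    ps -C mmfsd -o rss=,vsz=
    # configured pagepool; RES far above pagepool plus the metadata
    # structures is what points at a leak rather than at mFtC/mSC sizing
    mmlsconfig pagepool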
sorry for the noise (as usual you find the solution right after sending
the mail ;)
stijn
On 5/9/19 4:38 PM, Stijn De Weirdt wrote:
> hi achim,
>
>> you just misinterpreted the term fileCacheLimit.
>> This is not in byte, but specifies the maxFilesToCache setting :
> i understand that, but how does the fileCacheLimit relate to the
> fileCacheMem number?
>
>
>
> (we have a 32GB pagepool, and mmfsd is using 80GB RES (101GB VIRT), so we
> are looking for large numbers that might explain wtf is going on
> (pardon my french ;))
>
> stijn
>
>>
>> UMALLOC limits:
>> bufferDescLimit 40000 desired 40000
>> fileCacheLimit 4000 desired 4000 <=== mFtC
>> statCacheLimit 1000 desired 1000 <=== mSC
>> diskAddrBuffLimit 200 desired 200
>>
>> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache"
>> maxFilesToCache 4000
>> maxStatCache 1000
>>
>> Mit freundlichen Grüßen / Kind regards
>>
>> *Achim Rehor*
>>
>> --------------------------------------------------------------------------------
>> Software Technical Support Specialist AIX / EMEA HPC Support
>> IBM Certified Advanced Technical Expert - Power Systems with AIX
>> TSCC Software Service, Dept. 7922
>> Global Technology Services
>> --------------------------------------------------------------------------------
>> Phone: +49-7034-274-7862
>> E-Mail: [email protected]
>> IBM Deutschland, Am Weiher 24, 65451 Kelsterbach, Germany
>>
>> From: Stijn De Weirdt <[email protected]>
>> To: gpfsug main discussion list <[email protected]>
>> Date: 09/05/2019 16:21
>> Subject: [gpfsug-discuss] advanced filecache math
>> Sent by: [email protected]
>>
>> --------------------------------------------------------------------------------
>>
>>
>>
>> hi all,
>>
>> we are looking into some memory issues with gpfs 5.0.2.2, and found the
>> following in mmfsadm dump fs:
>>
>> > fileCacheLimit 1000000 desired 1000000
>> ...
>> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840)
>>
>> the limit is 1M (we configured that), however the fileCacheMem line
>> mentions 11.7M entries?
>>
>> this is also reported right after a mmshutdown/mmstartup.
>>
>> how do these two relate (again)?
>>
>> many thanks,
>>
>> stijn
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
