Perhaps this presentation can help you:
https://pt.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage
 
Abraços / Regards / Saludos,

 

Anderson Nobre
Power and Storage Consultant
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services

 

Phone: 55-19-2132-4317
 
 
----- Original message -----
From: Jim Doherty <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] advanced filecache math
Date: Thu, May 9, 2019 6:08 PM
 
 
 
A couple of observations on memory: a maxFilesToCache object takes anywhere from 6-10 KiB, so 1 million objects is roughly 6-10 GiB. Memory used by mmfsd comes from the pagepool, the shared memory segment used by MFTC objects, the token memory segment used to track MFTC objects, and, more recently, memory used by AFM. If those memory resources are in the mmfsd address space, they will show up in the RSS size of mmfsd. Ignore the virtual (VMM) size: since the glibc change a while back that allocates a heap for each thread, the virtual size has become an imaginary number for a large multi-threaded application.
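A rough back-of-the-envelope sketch of that arithmetic (the 6-10 KiB per-object figure is the estimate quoted above, not a published constant):

    # Python sketch: estimated memory for maxFilesToCache objects,
    # assuming 6-10 KiB per cached object as described above
    max_files_to_cache = 1_000_000
    low, high = 6 * 1024, 10 * 1024        # assumed bytes per MFTC object
    print(f"~{max_files_to_cache * low / 2**30:.1f} to "
          f"{max_files_to_cache * high / 2**30:.1f} GiB")
    # prints roughly 5.7 to 9.5 GiB for 1 million cached objects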
 
There have been some memory leaks fixed in Ganesha; those fixes will be in 4.2.3 PTF15, which is available on Fix Central.
 
 
Jim Doherty
 
 
On Thursday, May 9, 2019, 1:25:03 PM EDT, Sven Oehme <[email protected]> wrote:
 
 
Unfortunately it's more complicated than that :)
 
The consumption here is an estimate based on 512-byte inodes, which no newly created filesystem has, since all new filesystems default to 4k inodes. So if you have 4k inodes you could easily need 2x the estimated value.
Then there are extended attributes, which are also not accounted for here, etc.
So don't take this number as actual usage; it's really just a rough estimate.
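To make the 2x concrete, here is a hedged sketch that redoes the fileCacheMem arithmetic from the dump quoted further down in the thread (the 2840-byte per-entry overhead is taken from that dump line and is itself an approximation):

    # Python sketch: same object count, 512-byte vs 4 KiB inodes
    entries = 11_718_554                   # object count from the dump below
    overhead = 2840                        # per-entry overhead from the dump
    for inode_size in (512, 4096):
        gib = entries * (inode_size + overhead) / 2**30
        print(f"inode size {inode_size}: ~{gib:.1f} GiB")
    # prints ~36.6 GiB for 512-byte inodes and ~75.7 GiB for 4 KiB inodes,
    # i.e. roughly double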
 
Sven
 
 
On Thu, May 9, 2019, 5:53 PM Achim Rehor <[email protected]> wrote:
Sorry for my hasty (and not well thought out) answer before. You are obviously correct: there is no relation between the maxFilesToCache setting and the

fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)

usage. It is rather a statement of how much metadata may fit in the remaining structures outside the pagepool. This value does not change at all when you modify your mFtC setting.
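As a sanity check, that figure can be reproduced directly from the formula the dump prints (a sketch, assuming the stated 512 + 2840 bytes per entry; the result lands within a few KiB of the dump's own rounding):

    # Python sketch: reproduce the fileCacheMem figure from the dump
    entries = 11_718_554
    bytes_per_entry = 512 + 2840           # inode size + metadata overhead = 3352
    print(entries * bytes_per_entry // 1024, "k")
    # prints 38359954 k, vs. 38359956 k in the dump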

There is a really good presentation by Tomer Perry from the User Group meetings on the memory footprint of GPFS under various conditions.

In your case, you may very well be hitting the CES node memory leak you just pointed out.

Sorry for my hasty reply earlier ;)

Achim



From:        Stijn De Weirdt <[email protected]>
To:        [email protected]
Date:        09/05/2019 16:48
Subject:        Re: [gpfsug-discuss] advanced filecache math
Sent by:        [email protected]



seems like we are suffering from
http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737

as these are ces nodes, we suspected something wrong with the caches, but it
looks like a memleak instead.

sorry for the noise (as usual you find the solution right after sending
the mail ;)

stijn

On 5/9/19 4:38 PM, Stijn De Weirdt wrote:
> hi achim,
>
>> you just misinterpreted the term fileCacheLimit.
>> This is not in bytes, but specifies the maxFilesToCache setting:
> i understand that, but how does the fileCacheLimit relate to the
> fileCacheMem number?
>
>
>
> (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we
> are looking for large numbers that might explain wtf is going on
> (pardon my french ;)
>
> stijn
>
>>
>> UMALLOC limits:
>>      bufferDescLimit      40000 desired    40000
>>      fileCacheLimit  4000 desired     4000   <=== mFtC
>>      statCacheLimit  1000 desired     1000   <=== mSC
>>      diskAddrBuffLimit      200 desired      200
>>
>> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache"
>>     maxFilesToCache 4000
>>     maxStatCache 1000
>>
>> Mit freundlichen Grüßen / Kind regards
>>
>> *Achim Rehor*
>>
>>
>>
>> From: Stijn De Weirdt <[email protected]>
>> To: gpfsug main discussion list <[email protected]>
>> Date: 09/05/2019 16:21
>> Subject: [gpfsug-discuss] advanced filecache math
>> Sent by: [email protected]
>>
>> --------------------------------------------------------------------------------
>>
>>
>>
>> hi all,
>>
>> we are looking into some memory issues with gpfs 5.0.2.2, and found the
>> following in mmfsadm dump fs:
>>
>>  >     fileCacheLimit     1000000 desired  1000000
>> ...
>>  >     fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)
>>
>> the limit is 1M (we configured that); however, the fileCacheMem line mentions
>> 11.7M objects?
>>
>> this is also reported right after a mmshutdown/startup.
>>
>> how do these 2 relate (again?)?
>>
>> many thanks,
>>
>> stijn
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
