All,
 
It looks to me like the demands of the workload will dictate how many files we should cache, that is: maxStatCache + maxFilesToCache.
 
The "mix" between maxStatCache and maxFilesToCache depends on how much memory can be made available. Accessing files from maxFilesToCache is more efficient, but stat cache entries use much less space.
 
With the
 
 ! maxFilesToCache 3000000
    maxStatCache 10000

 
combination, the stat cache is not providing any significant help, since it can hold only about 0.3% as many entries as the file cache. If enough memory is available, maxStatCache could be increased to (say) 3000000, at a cost of about 1.4GB. But maxFilesToCache = 3000000 uses up to 27GB. The next questions are then:
 
1) Can that much memory be made available on the node, given the pagepool size?
 
2) Does the workload require caching that many files?
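
For reference, a rough version of the arithmetic behind the 1.4GB and 27GB figures above, using the approximate per-entry costs quoted later in this thread (about 10 kbytes per maxFilesToCache entry and about 500 bytes per maxStatCache entry); the exact per-entry sizes vary by release, so treat the output as an estimate:

  awk 'BEGIN {
      gib = 1024 * 1024 * 1024
      ftc = 3000000            # maxFilesToCache
      msc = 3000000            # proposed maxStatCache
      printf "maxFilesToCache ~ %.1f GiB\n", ftc * 10000 / gib   # ~10 KB per file cache entry
      printf "maxStatCache    ~ %.1f GiB\n", msc * 500 / gib     # ~500 bytes per stat cache entry
  }'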
 
 
  Felipe
 
----
Felipe Knop [email protected]
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314
 
 
 
----- Original message -----
From: "Frederick Stock" <[email protected]>
Sent by: [email protected]
To: [email protected]
Cc: [email protected]
Subject: [EXTERNAL] Re: [gpfsug-discuss] maxStatCache and maxFilesToCache: Tip "gpfs_maxstatcache_low".
Date: Fri, Mar 13, 2020 10:01 AM
 
As you have learned, there is no simple formula for setting the maxStatCache, or for that matter the maxFilesToCache, configuration values. Memory is certainly one consideration, but another is directory listing operations. The information kept in the stat cache is sufficient for fulfilling directory listings. If your users are doing directory listings regularly, then a larger stat cache could be helpful.
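
A quick way to see why the stat cache matters for listings: a long listing issues a stat-family call per directory entry, and the stat cache can satisfy those lookups without a full file-cache entry. A rough check on a client node (the path is only an example):

  # Count the syscalls a long listing makes; the summary on stderr shows
  # stat-family calls (e.g. newfstatat or statx) and getdents64 counts roughly
  # proportional to the number of entries in the directory.
  strace -c ls -l /gpfs/fs0/somedir > /dev/null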

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
[email protected]
 
 
----- Original message -----
From: Philipp Grau <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: [EXTERNAL] [gpfsug-discuss] maxStatCache and maxFilesToCache: Tip "gpfs_maxstatcache_low".
Date: Fri, Mar 13, 2020 8:49 AM
 
Hello,

We have a two-node NSD cluster based on a DDN system. Currently we
run Spectrum Scale 5.0.4.1 in an HPC environment.

Mmhealth shows a tip stating "gpfs_maxstatcache_low". Our current settings are:

# mmdiag --config | grep -i cache
 ! maxFilesToCache 3000000
    maxStatCache 10000

maxFilesToCache was tuned during installation and maxStatCache is the
corresponding default value.

After discussing this issue at the German Spectrum Scale meeting, I
understand that it is difficult to give a formula for how to calculate
these values.

But I learnt that a FilesToCache entry costs about 10 kbytes of memory
and a StatCache entry about 500 bytes. And typically maxStatCache
should (obviously) be greater than maxFilesToCache. There is an average
of 100 GB memory usage on our systems (with a total of 265 GB RAM).

So setting maxStatCache to at least 3000000 should be no problem. But
is that correct, or too high/low?
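
A minimal sketch of applying and verifying such a change, assuming the standard mmchconfig attribute=value syntax ("nsdNodes" is only a placeholder node class, and whether the running daemon picks up the new value without a restart should be checked in the documentation for your release):

  # Raise the stat cache on the relevant nodes.
  mmchconfig maxStatCache=3000000 -N nsdNodes

  # Check what the running mmfsd is using; this may only show the new value
  # after the daemon has been recycled.
  mmdiag --config | grep -i statcache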

Does anyone have hints or thoughts on this topic? Help is welcome.

Regards,

Philipp

--
 Philipp Grau               | Freie Universitaet Berlin  
 [email protected]  | Zentraleinrichtung fuer Datenverarbeitung
 Tel: +49 (30) 838 56583    | Fabeckstr. 32  
 Fax: +49 (30) 838 56721    | 14195 Berlin  
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss 

 
 
 