So you're saying maxStatCache should be raised only on LROC-enabled nodes, as 
that's the only place under Linux where it's used, and it should stay low on 
non-LROC-enabled nodes.

Fine, just good to know. Nice and easy now with nodeclasses....
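(For the archives: the per-nodeclass change can be sketched roughly as below. The class name lrocNodes, the member node names, and the maxStatCache value are hypothetical placeholders for illustration, not recommendations from this thread.)

```shell
# Group the LROC-enabled nodes into a node class (names are placeholders)
mmcrnodeclass lrocNodes -N lrocnode01,lrocnode02

# Raise maxStatCache only for that class; leave maxFilesToCache untouched
mmchconfig maxStatCache=1000000 -N lrocNodes

# Verify the per-nodeclass override
mmlsconfig maxStatCache
```

The non-LROC nodes keep the cluster-wide default, so only the nodes that actually benefit pay the memory cost.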

Peter Childs


________________________________________
From: [email protected] 
<[email protected]> on behalf of Sven Oehme 
<[email protected]>
Sent: Wednesday, December 21, 2016 11:37:46 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] LROC

StatCache is not useful on Linux; that hasn't changed if you don't use LROC on 
the same node. LROC uses the compact object (StatCache) to store its pointer to 
the full file object, which is stored on the LROC device. So on a call for 
attributes that are not in the StatCache, the object gets recalled from LROC and 
converted back into a full file object. That is why you still need a reasonable 
maxFilesToCache setting even when you use LROC; otherwise you constantly move 
file information in and out of LROC and put the device under heavy load.

sven



On Wed, Dec 21, 2016 at 12:29 PM Peter Childs 
<[email protected]<mailto:[email protected]>> wrote:
My understanding was that maxStatCache was only used on AIX and should be set 
low on Linux, as raising it didn't help and wasted resources. Are we saying that 
LROC now uses it, and that setting it low when you raise maxFilesToCache under 
Linux is no longer the advice?


Peter Childs


________________________________________
From: [email protected] 
<[email protected]> on behalf of Sven Oehme 
<[email protected]>
Sent: Wednesday, December 21, 2016 9:23:16 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] LROC

LROC only needs a StatCache object, as it 'compacts' a full open file object 
(maxFilesToCache) into a StatCache object when it moves the content to the LROC 
device.
Therefore the only thing you really need to increase is maxStatCache on the 
LROC node. You still need the full file objects, so leave maxFilesToCache 
untouched and just increase maxStatCache.

Olaf's comment is important: you need to make sure your manager nodes have 
enough memory to hold tokens for all the objects you want to cache. But if the 
memory is there, it's well worth spending a lot of memory on it and bumping 
maxStatCache to a high number. I have tested maxStatCache up to 16 million per 
node at some point, but if nodes holding this many inodes crash, or you try to 
shut them down, you see some delays. Therefore I suggest you stay within 1 or 2 
million per node and see how well it does, and whether you get a significant 
gain.
I did help Bob set up some monitoring for it so he can actually get comparable 
stats. I suggest you set up Zimon and enable the LROC sensors to have real 
stats too, so you can see what benefit you get.
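(A quick way to sanity-check LROC behaviour even without a full Zimon setup is the standard mmdiag option shown below; any Zimon sensor names should be confirmed against the documentation for your Spectrum Scale version.)

```shell
# Show LROC device status and cache statistics on the local node:
# counts of inode, directory, and data objects cached, plus recall/store
# activity, which indicates whether the device is under heavy load
mmdiag --lroc
```

Comparing these counters before and after raising maxStatCache gives a rough read on whether the change is paying off.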

Sven

On Tue, Dec 20, 2016 at 8:13 PM Matt Weil 
<[email protected]> wrote:

as many as possible and both

have maxFilesToCache 128000

and maxStatCache 40000

do these affect what sits on the LROC as well?  Are those too small? 1 million 
seemed excessive.

On 12/20/16 11:03 AM, Sven Oehme wrote:
how many files do you want to cache?
and do you only want to cache metadata, or also the data associated with the 
files?

sven



On Tue, Dec 20, 2016 at 5:35 PM Matt Weil 
<[email protected]> wrote:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Flash%20Storage

Hello all,

Are there any tuning recommendations to get these to cache more metadata?

Thanks

Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



