LROC only needs a StatCache object: GPFS "compacts" a full OpenFile object (accounted under maxFilesToCache) into a StatCache object when it moves the content to the LROC device. Therefore the only thing you really need to increase is maxStatCache on the LROC node. You still need the maxFilesToCache objects, so leave that untouched and just increase maxStatCache.
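A minimal sketch of that change with the standard GPFS admin commands (the node class name `lrocNodes` is a placeholder, not from this thread; adjust it to your cluster):

```shell
# Check the current values before changing anything
mmlsconfig maxFilesToCache maxStatCache

# Raise maxStatCache only on the LROC nodes ('lrocNodes' is a
# hypothetical node class); the new value takes effect once GPFS
# is restarted on those nodes
mmchconfig maxStatCache=1048576 -N lrocNodes
```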
Olaf's comment is important: you need to make sure your manager nodes have enough memory to hold tokens for all the objects you want to cache. But if the memory is there, it is well worth spending a lot of memory on this and bumping maxStatCache to a high number. I have tested maxStatCache up to 16 million per node at some point, but nodes with that many cached inodes take a long time to shut down, or to recover if they crash, so I suggest you stay within 1 or 2 million per node and see how well it does and whether you get a significant gain.

I did help Bob set up some monitoring for this so he can get comparable stats; I suggest you set up ZIMon and enable the LROC sensors so you have real stats too and can see what benefit you get.

Sven

On Tue, Dec 20, 2016 at 8:13 PM Matt Weil <mw...@wustl.edu> wrote:

> as many as possible, and both:
>
> have maxFilesToCache 128000
> and maxStatCache 40000
>
> do these affect what sits on the LROC as well? Are those too small?
> 1 million seemed excessive.
>
> On 12/20/16 11:03 AM, Sven Oehme wrote:
> > how many files do you want to cache?
> > and do you only want to cache metadata or also data associated to the files?
> >
> > sven
> >
> > On Tue, Dec 20, 2016 at 5:35 PM Matt Weil <mw...@wustl.edu> wrote:
> >
> > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Flash%20Storage
> > >
> > > Hello all,
> > >
> > > Are there any tuning recommendations to get these to cache more metadata?
> > > Thanks
> > > Matt
> > >
> > > _______________________________________________
> > > gpfsug-discuss mailing list
> > > gpfsug-discuss at spectrumscale.org
> > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
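The monitoring Sven suggests could look roughly like this (the `GPFSLROC` sensor name is an assumption from memory; verify what your Spectrum Scale release ships with `mmperfmon config show`):

```shell
# Ad-hoc LROC statistics on a single node
mmdiag --lroc

# Inspect which ZIMon sensors are configured and how often they run
mmperfmon config show

# Enable the LROC sensor (sensor name 'GPFSLROC' is an assumption;
# a period of 0 disables a sensor, a positive period in seconds enables it)
mmperfmon config update GPFSLROC.period=10
```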
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss