We’re currently deploying LROC in many of our compute nodes – results so far 
have been excellent. We’re putting in 240 GB SSDs, because we have mostly small 
files. As far as I know, the number of inodes and directories in LROC is not 
limited, except by the size of the cache disk.
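For reference, an LROC device is defined like any other NSD, just with usage=localCache and no server list beyond the local node. A minimal sketch – the device path, NSD name, and node name below are placeholders, not our actual config:

```shell
# Define the local SSD as an LROC device: an NSD stanza with
# usage=localCache, then create it with mmcrnsd.
# /dev/sdb, node1_lroc, and node1 are placeholders.
cat > lroc.stanza <<'EOF'
%nsd:
  device=/dev/sdb
  nsd=node1_lroc
  servers=node1
  usage=localCache
EOF

mmcrnsd -F lroc.stanza
```

Once the NSD exists, the daemon on that node starts using it as its read-only cache; no file system assignment is needed.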

Look at these config options for LROC:

lrocData
Controls whether user data is populated into the local read-only cache. Other 
configuration options can be used to select the data that is eligible for the 
local read-only cache. When using more than one such configuration option, data 
that matches any of the specified criteria is eligible to be saved.
Valid values are yes or no. The default value is yes.
If lrocData is set to yes, by default the data that was not already in the 
cache when accessed by a user is subsequently saved to the local read-only 
cache. The default behavior can be overridden using the lrocDataMaxFileSize 
and lrocDataStubFileSize configuration options to save all data from small 
files or all data from the initial portion of large files.
lrocDataMaxFileSize
Limits the data that may be saved in the local read-only cache to only the data 
from small files.
A value of -1 indicates that all data is eligible to be saved. A value of 0 
indicates that small files are not to be saved. A positive value indicates the 
maximum size of a file to be considered for the local read-only cache. For 
example, a value of 32768 indicates that files with 32 KB of data or less are 
eligible to be saved in the local read-only cache. The default value is 0.
lrocDataStubFileSize
Limits the data that may be saved in the local read-only cache to only the data 
from the first portion of all files.
A value of -1 indicates that all file data is eligible to be saved. A value of 
0 indicates that stub data is not eligible to be saved. A positive value 
indicates that the initial portion of each file that is eligible is to be 
saved. For example, a value of 32768 indicates that the first 32 KB of data 
from each file is eligible to be saved in the local read-only cache. The 
default value is 0.
lrocDirectories
Controls whether directory blocks are populated into the local read-only 
cache. The option also controls other file system metadata such as indirect 
blocks, symbolic links, and extended attribute overflow blocks.
Valid values are yes or no. The default value is yes.
lrocInodes
Controls whether inodes from open files are populated into the local read-only 
cache; the cache contains the full inode, including all disk pointers, extended 
attributes, and data.
Valid values are yes or no. The default value is yes.
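These options are all set with mmchconfig, typically scoped to the LROC nodes with -N. A sketch of a small-file-oriented setup like ours – the node class name "lrocNodes" is a placeholder, and whether a given attribute change needs a daemon restart should be checked for your release:

```shell
# Cache metadata (inodes, directory/indirect blocks) on the LROC nodes.
# "lrocNodes" is a placeholder node class -- substitute your own.
mmchconfig lrocInodes=yes,lrocDirectories=yes -N lrocNodes

# Cache user data, but only for files of 32 KB or less
# (lrocDataMaxFileSize is in bytes).
mmchconfig lrocData=yes,lrocDataMaxFileSize=32768 -N lrocNodes

# Confirm the resulting values.
mmlsconfig lrocData
mmlsconfig lrocDataMaxFileSize
```

Using lrocDataStubFileSize=32768 instead of lrocDataMaxFileSize would cache the first 32 KB of every file rather than whole small files only; the two can also be combined, since data matching any criterion is eligible.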


Bob Oesterlin
Sr Principal Storage Engineer, Nuance
507-269-0413



From: <[email protected]> on behalf of Matt Weil 
<[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Tuesday, December 20, 2016 at 1:13 PM
To: "[email protected]" <[email protected]>
Subject: [EXTERNAL] Re: [gpfsug-discuss] LROC


as many as possible and both

have maxFilesToCache 128000

and maxStatCache 40000

do these affect what sits on the LROC as well?  Are those too small? 1 million 
seemed excessive.

On 12/20/16 11:03 AM, Sven Oehme wrote:
how many files do you want to cache?
and do you only want to cache metadata, or also data associated with the files?

sven



On Tue, Dec 20, 2016 at 5:35 PM Matt Weil 
<[email protected]<mailto:[email protected]>> wrote:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Flash%20Storage

Hello all,

Are there any tuning recommendations to get these to cache more metadata?

Thanks

Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss