You mention that all the NSDs hold both metadata and data, but you do not say
how many NSDs are defined or what type of storage backs them; that is, are
these on SAS or NL-SAS storage? I'm assuming they are not on SSD/flash storage.

Have you considered moving the metadata to separate NSDs, preferably on
SSD/flash storage? This is likely to give you a significant performance
improvement.

You state that using the inode scan API you reduced the time to 40 days.
Did you analyze your backup application to determine where the time was 
being spent for the backup?  If the inode scan is a small percentage of 
your backup time then optimizing it will not provide much benefit.
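Fred's point can be made concrete with a back-of-envelope Amdahl-style estimate (the numbers below are purely illustrative, not measurements from this thread):

```python
# Illustrative only: if the inode/stat phase is a fraction of the total
# backup time, speeding up just that phase has a bounded payoff.
def time_after_speedup(total_days, scan_fraction, speedup):
    """Total time remaining after accelerating only the scan phase."""
    scan = total_days * scan_fraction
    rest = total_days - scan
    return rest + scan / speedup

# Hypothetical: a 40-day backup where the inode scan is 10% of the time.
# Even a near-infinite speedup of the scan saves at most 4 days.
print(round(time_after_speedup(40, 0.10, 1000), 2))
```

This is why profiling the backup application first matters: the upper bound on savings is the time actually spent in the phase being optimized.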

Fred Stock | IBM Pittsburgh Lab | 720-430-8821

From:   "" <>
To:     "" 
Date:   02/08/2018 05:50 AM
Subject:        [gpfsug-discuss] Inode scan optimization
Sent by:

Hello All,
A full backup of a 2-billion-inode Spectrum Scale file system on
V4.1.1.16 takes 60 days.
We are trying to optimize this, and using inode scans seems to help, even
though we use a directory scan and the inode scan only to get better stat
performance (via gpfs_stat_inode_with_xattrs64). With 20 processes in
parallel doing directory scans (plus inode scans for stat info), we have
reduced the time to 40 days.
All NSDs are dataAndMetadata type.
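The parallel-scan setup described above could partition the inode space so that each worker scans a contiguous range, passing the range end as the termIno argument so its scan stops at the boundary. This is a hypothetical sketch of only the range arithmetic; the per-worker GPFS API calls (gpfs_open_inodescan64 / gpfs_stat_inode_with_xattrs64) are omitted:

```python
# Hypothetical helper: split [0, max_ino) into `workers` near-equal
# contiguous ranges. Each worker would open its own inode scan and use
# the range end as termIno so scans do not overlap.
def partition_inodes(max_ino, workers):
    """Return a list of (start, term) tuples covering [0, max_ino)."""
    step, extra = divmod(max_ino, workers)
    ranges, start = [], 0
    for i in range(workers):
        end = start + step + (1 if i < extra else 0)  # spread remainder
        ranges.append((start, end))
        start = end
    return ranges

# 2 billion inodes across 20 workers -> 100-million-inode ranges.
print(partition_inodes(2_000_000_000, 20)[0])  # (0, 100000000)
```

Contiguous ranges keep each worker's inode reads sequential, which is generally friendlier to prefetching than interleaved assignment.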
I have the following questions:
- Is there a way to increase the inode scan cache (we may use 32
  - Can we use the "hidden" config parameters
    - iscanPrefetchAggressiveness 2
    - iscanPrefetchDepth 0
    - iscanPrefetchThreadsPerNode 0
- Is there any documentation on the cache behavior?
  - If not, is the inode scan cache process-specific or node-specific?
  - Is there a suggestion for tuning the termIno parameter of
    gpfs_stat_inode_with_xattrs64() in such a use case?
Best regards,
Tomasz Wolski
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at
